Artificial Intelligence Nanodegree

Computer Vision Capstone

Project: Facial Keypoint Detection


Welcome to the final Computer Vision project in the Artificial Intelligence Nanodegree program!

In this project, you’ll combine your knowledge of computer vision techniques and deep learning to build and end-to-end facial keypoint recognition system! Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition.

There are three main parts to this project:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!


*Here's what you need to know to complete the project:

  1. In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested.

    a. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

  1. In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation.

    a. Each section where you will answer a question is preceded by a 'Question X' header.

    b. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional suggestions for enhancing the project beyond the minimum requirements. If you decide to pursue the "(Optional)" sections, you should include the code in this IPython notebook.

Your project submission will be evaluated based on your answers to each of the questions and the code implementations you provide.

Steps to Complete the Project

Each part of the notebook is further broken down into separate steps. Feel free to use the links below to navigate the notebook.

In this project you will get to explore a few of the many computer vision algorithms built into the OpenCV library. This expansive computer vision library is now almost 20 years old and still growing!

The project itself is broken down into three large parts, then even further into separate steps. Make sure to read through each step, and complete any sections that begin with '(IMPLEMENTATION)' in the header; these implementation sections may contain multiple TODOs that will be marked in code. For convenience, we provide links to each of these steps below.

Part 1 : Investigating OpenCV, pre-processing, and face detection

  • Step 0: Detect Faces Using a Haar Cascade Classifier
  • Step 1: Add Eye Detection
  • Step 2: De-noise an Image for Better Face Detection
  • Step 3: Blur an Image and Perform Edge Detection
  • Step 4: Automatically Hide the Identity of an Individual

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

  • Step 5: Create a CNN to Recognize Facial Keypoints
  • Step 6: Compile and Train the Model
  • Step 7: Visualize the Loss and Answer Questions

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

  • Step 8: Build a Robust Facial Keypoints Detector (Complete the CV Pipeline)

Step 0: Detect Faces Using a Haar Cascade Classifier

Have you ever wondered how Facebook automatically tags images with your friends' faces? Or how high-end cameras automatically find and focus on a certain person's face? Applications like these depend heavily on the machine learning task known as face detection - which is the task of automatically finding faces in images containing people.

At its root face detection is a classification problem - that is a problem of distinguishing between distinct classes of things. With face detection these distinct classes are 1) images of human faces and 2) everything else.

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the detector_architectures directory.

Import Resources

In the next python cell, we load in the required libraries for this section of the project.

In [23]:
# Import required libraries for this section

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import math
import cv2                     # OpenCV library for computer vision
from PIL import Image
import time 

Next, we load in and display a test image for performing face detection.

Note: by default OpenCV assumes the ordering of our image's color channels are Blue, then Green, then Red. This is slightly out of order with most image types we'll use in these experiments, whose color channels are ordered Red, then Green, then Blue. In order to switch the Blue and Red channels of our test image around we will use OpenCV's cvtColor function, which you can read more about by checking out some of its documentation located here. This is a general utility function that can do other transformations too like converting a color image to grayscale, and transforming a standard color image to HSV color space.

In [24]:
# Load in color image for face detection
image = cv2.imread('images/test_image_1.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot our image using subplots to specify a size and title
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[24]:
<matplotlib.image.AxesImage at 0x15196588>

There are a lot of people - and faces - in this picture. 13 faces to be exact! In the next code cell, we demonstrate how to use a Haar Cascade classifier to detect all the faces in this test image.

This face detector uses information about patterns of intensity in an image to reliably detect faces under varying light conditions. So, to use this face detector, we'll first convert the image from color to grayscale.

Then, we load in the fully trained architecture of the face detector -- found in the file haarcascade_frontalface_default.xml - and use it on our image to find faces!

To learn more about the parameters of the detector see this post.

In [25]:
def detect_faces(img, scaleFactor=1.5, minNeighbors = 5):

    # Convert the RGB  image to grayscale
    gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    # Extract the pre-trained face detector from an xml file
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
    
    # Detect the faces in image
    faces = face_cascade.detectMultiScale(gray_img, scaleFactor, minNeighbors)

    # Make a copy of the orginal image to draw face detections on
    img_with_detections = np.copy(img)

    # Get the bounding box for each detected face
    for (x,y,w,h) in faces:
        # Add a red bounding box to the detections image
        cv2.rectangle(img_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    
    return img_with_detections, faces

image_with_detections, faces = detect_faces(image, 4, 6)
print('Number of faces detected:', len(faces))
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[25]:
<matplotlib.image.AxesImage at 0x198b23c8>

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.


Step 1: Add Eye Detections

There are other pre-trained detectors available that use a Haar Cascade Classifier - including full human body detectors, license plate detectors, and more. A full list of the pre-trained architectures can be found here.

To test your eye detector, we'll first read in a new test image with just a single face.

In [26]:
# Load in color image for face detection
image = cv2.imread('images/james.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot the RGB image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[26]:
<matplotlib.image.AxesImage at 0xf74e470>

Notice that even though the image is a black and white image, we have read it in as a color image and so it will still need to be converted to grayscale in order to perform the most accurate face detection.

So, the next steps will be to convert this image to grayscale, then load OpenCV's face detector and run it with parameters that detect this face accurately.

In [27]:
image_with_detections, _ = detect_faces(image, 1.25, 6)

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)
Out[27]:
<matplotlib.image.AxesImage at 0xf7a7390>

(IMPLEMENTATION) Add an eye detector to the current face detection setup.

A Haar-cascade eye detector can be included in the same way that the face detector was and, in this first task, it will be your job to do just this.

To set up an eye detector, use the stored parameters of the eye cascade detector, called haarcascade_eye.xml, located in the detector_architectures subdirectory. In the next code cell, create your eye detector and store its detections.

A few notes before you get started:

First, make sure to give your loaded eye detector the variable name

eye_cascade

and give the list of eye regions you detect the variable name

eyes

Second, since we've already run the face detector over this image, you should only search for eyes within the rectangular face regions detected in faces. This will minimize false detections.

Lastly, once you've run your eye detector over the facial detection region, you should display the RGB image with both the face detection boxes (in red) and your eye detections (in green) to verify that everything works as expected.

In [28]:
def detect_eyes(img, faces, scaleFactor=1.1, minNeighbors=4):

    # Make a copy of the original image to plot rectangle detections
    img_with_detections = np.copy(img)  

    gray_img = cv2.cvtColor(image_with_detections, cv2.COLOR_RGB2GRAY)

    eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

    # Detect the faces in image
    eyes = None
    for (x,y,w,h) in faces:
        tmp_img = gray_img[y:y+h,x:x+w]
        eyes = eye_cascade.detectMultiScale(tmp_img, scaleFactor, minNeighbors)
        for (ex,ey,ew,eh) in eyes:
            cv2.rectangle(img_with_detections, (x+ex,y+ey), (x+ex+ew,y+ey+eh),(0,255,0), 3)
    
    return img_with_detections, eyes

image_with_detected_faces, faces = detect_faces(image, 1.25, 6)
image_with_detected_faces_and_eyes, _ = detect_eyes(image_with_detected_faces, faces)

# Plot the image with both faces and eyes detected
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face and Eye Detection')
ax1.imshow(image_with_detected_faces_and_eyes)
Out[28]:
<matplotlib.image.AxesImage at 0xf7fcb70>

(Optional) Add face and eye detection to your laptop camera

It's time to kick it up a notch, and add face and eye detection to your laptop's camera! Afterwards, you'll be able to show off your creation like in the gif shown below - made with a completed version of the code!

Notice that not all of the detections here are perfect - and your result need not be perfect either. You should spend a small amount of time tuning the parameters of your detectors to get reasonable results, but don't hold out for perfection. If we wanted perfection we'd need to spend a ton of time tuning the parameters of each detector, cleaning up the input image frames, etc. You can think of this as more of a rapid prototype.

The next cell contains code for a wrapper function called laptop_camera_face_eye_detector that, when called, will activate your laptop's camera. You will place the relevant face and eye detection code in this wrapper function to implement face/eye detection and mark those detections on each image frame that your camera captures.

Before adding anything to the function, you can run it to get an idea of how it works - a small window should pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [69]:
### Add face and eye detection to this laptop camera function 
# Make sure to draw out all faces/eyes found in each frame on the shown video feed

import cv2
import time 


def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened():
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep the video stream open
    while rval:
        image_with_faces, faces = detect_faces(frame, 1.3, 4)
        image_with_faces_and_eyes, _ = detect_eyes(image_with_faces, faces, 1.1, 4)
        
        # Plot the image from camera with all the face and eye detections marked
        cv2.imshow("face detection activated", image_with_faces_and_eyes)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key < 255: # Exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            # Make sure window closes on OSx
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
In [71]:
# Call the laptop camera face/eye detector function above
laptop_camera_go()

Step 2: De-noise an Image for Better Face Detection

Image quality is an important aspect of any computer vision task. Typically, when creating a set of images to train a deep learning network, significant care is taken to ensure that training images are free of visual noise or artifacts that hinder object detection. While computer vision algorithms - like a face detector - are typically trained on 'nice' data such as this, new test data doesn't always look so nice!

When applying a trained computer vision algorithm to a new piece of test data one often cleans it up first before feeding it in. This sort of cleaning - referred to as pre-processing - can include a number of cleaning phases like blurring, de-noising, color transformations, etc., and many of these tasks can be accomplished using OpenCV.

In this short subsection we explore OpenCV's noise-removal functionality to see how we can clean up a noisy image, which we then feed into our trained face detector.

Create a noisy image to work with

In the next cell, we create an artificial noisy version of the previous multi-face image. This is a little exaggerated - we don't typically get images that are this noisy - but image noise, or 'grainy-ness' in a digitial image - is a fairly common phenomenon.

In [37]:
# Load in the multi-face test image again
image = cv2.imread('images/test_image_1.jpg')

# Convert the image copy to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Make an array copy of this image
image_with_noise = np.asarray(image)

# Create noise - here we add noise sampled randomly from a Gaussian distribution: a common model for noise
noise_level = 40
noise = np.random.randn(image.shape[0],image.shape[1],image.shape[2])*noise_level

# Add this noise to the array image copy
image_with_noise = image_with_noise + noise

# Convert back to uint8 format
image_with_noise = np.asarray([np.uint8(np.clip(i,0,255)) for i in image_with_noise])

# Plot our noisy image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image')
ax1.imshow(image_with_noise)
Out[37]:
<matplotlib.image.AxesImage at 0xf8071d0>

In the context of face detection, the problem with an image like this is that - due to noise - we may miss some faces or get false detections.

In the next cell we apply the same trained OpenCV detector with the same settings as before, to see what sort of detections we get.

In [44]:
image_with_detections, faces = detect_faces(image_with_noise, 2, 4)
print('Number of faces detected:', len(faces))
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 12
Out[44]:
<matplotlib.image.AxesImage at 0x19afa278>

With this added noise we now miss one of the faces!

(IMPLEMENTATION) De-noise this image for better face detection

Time to get your hands dirty: using OpenCV's built in color image de-noising functionality called fastNlMeansDenoisingColored - de-noise this image enough so that all the faces in the image are properly detected. Once you have cleaned the image in the next cell, use the cell that follows to run our trained face detector over the cleaned image to check out its detections.

You can find its official documentation here and a useful example here.

Note: you can keep all parameters except photo_render fixed as shown in the second link above. Play around with the value of this parameter - see how it affects the resulting cleaned image.

In [39]:
denoised_image = cv2.fastNlMeansDenoisingColored(image_with_noise,None,15,15,7,21)
In [45]:
image_with_detections, faces = detect_faces(denoised_image, 2, 4)
print('Number of faces detected:', len(faces))
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections after denoising')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[45]:
<matplotlib.image.AxesImage at 0xfb9c278>

Step 3: Blur an Image and Perform Edge Detection

Now that we have developed a simple pipeline for detecting faces using OpenCV - let's start playing around with a few fun things we can do with all those detected faces!

Importance of Blur in Edge Detection

Edge detection is a concept that pops up almost everywhere in computer vision applications, as edge-based features (as well as features built on top of edges) are often some of the best features for e.g., object detection and recognition problems.

Edge detection is a dimension reduction technique - by keeping only the edges of an image we get to throw away a lot of non-discriminating information. And typically the most useful kind of edge-detection is one that preserves only the important, global structures (ignoring local structures that aren't very discriminative). So removing local structures / retaining global structures is a crucial pre-processing step to performing edge detection in an image, and blurring can do just that.

Below is an animated gif showing the result of an edge-detected cat taken from Wikipedia, where the image is gradually blurred more and more prior to edge detection. When the animation begins you can't quite make out what it's a picture of, but as the animation evolves and local structures are removed via blurring the cat becomes visible in the edge-detected image.

Edge detection is a convolution performed on the image itself, and you can read about Canny edge detection on this OpenCV documentation page.

Canny edge detection

In the cell below we load in a test image, then apply Canny edge detection on it. The original image is shown on the left panel of the figure, while the edge-detected version of the image is shown on the right. Notice how the result looks very busy - there are too many little details preserved in the image before it is sent to the edge detector. When applied in computer vision applications, edge detection should preserve global structure; doing away with local structures that don't help describe what objects are in the image.

In [46]:
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

# Perform Canny edge detection
edges = cv2.Canny(gray,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[46]:
<matplotlib.image.AxesImage at 0xfe29e10>

Without first blurring the image, and removing small, local structures, a lot of irrelevant edge content gets picked up and amplified by the detector (as shown in the right panel above).

(IMPLEMENTATION) Blur the image then perform edge detection

In the next cell, you will repeat this experiment - blurring the image first to remove these local structures, so that only the important boudnary details remain in the edge-detected image.

Blur the image by using OpenCV's filter2d functionality - which is discussed in this documentation page - and use an averaging kernel of width equal to 4.

In [47]:
# Use an averaging kernel, and a kernel width equal to 4
kernel = np.ones((5,5),np.float32)/25
blur = cv2.filter2D(gray,-1,kernel)

edges = cv2.Canny(blur,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[47]:
<matplotlib.image.AxesImage at 0xfef8f98>

Step 4: Automatically Hide the Identity of an Individual

If you film something like a documentary or reality TV, you must get permission from every individual shown on film before you can show their face, otherwise you need to blur it out - by blurring the face a lot (so much so that even the global structures are obscured)! This is also true for projects like Google's StreetView maps - an enormous collection of mapping images taken from a fleet of Google vehicles. Because it would be impossible for Google to get the permission of every single person accidentally captured in one of these images they blur out everyone's faces, the detected images must automatically blur the identity of detected people. Here's a few examples of folks caught in the camera of a Google street view vehicle.

Read in an image to perform identity detection

Let's try this out for ourselves. Use the face detection pipeline built above and what you know about using the filter2D to blur and image, and use these in tandem to hide the identity of the person in the following image - loaded in and printed in the next cell.

In [48]:
# Load in the image
image = cv2.imread('images/gus.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[48]:
<matplotlib.image.AxesImage at 0x1714d160>

(IMPLEMENTATION) Use blurring to hide the identity of an individual in an image

The idea here is to 1) automatically detect the face in this image, and then 2) blur it out! Make sure to adjust the parameters of the averaging blur filter to completely obscure this person's identity.

In [49]:
_, faces = detect_faces(image, 4, 5)
print('Number of faces detected:', len(faces))

image_with_blur = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    image_with_blur[y:y+h, x:x+h] = cv2.blur(image_with_blur[y:y+h, x:x+h],(200,200))
    

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Blurred Faces')
ax1.imshow(image_with_blur)
Number of faces detected: 1
Out[49]:
<matplotlib.image.AxesImage at 0x171a0518>

(Optional) Build identity protection into your laptop camera

In this optional task you can add identity protection to your laptop camera, using the previously completed code where you added face detection to your laptop camera - and the task above. You should be able to get reasonable results with little parameter tuning - like the one shown in the gif below.

As with the previous video task, to make this perfect would require significant effort - so don't strive for perfection here, strive for reasonable quality.

The next cell contains code a wrapper function called laptop_camera_identity_hider that - when called - will activate your laptop's camera. You need to place the relevant face detection and blurring code developed above in this function in order to blur faces entering your laptop camera's field of view.

Before adding anything to the function you can call it to get a hang of how it works - a small window will pop up showing you the live feed from your camera, you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [72]:
# Insert face detection and blurring code into the wrapper below to create an identity protector on your laptop!
import cv2
import time 

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        # Plot image from camera with detections marked
        _, faces = detect_faces(frame, 1.1, 3)
        image_with_blur = np.copy(frame)

        for (x,y,w,h) in faces:
            image_with_blur[y:y+h, x:x+h] = cv2.blur(image_with_blur[y:y+h, x:x+h],(200,200))
        cv2.imshow("face detection activated", image_with_blur)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key < 255: # Exit by pressing any key
            # Destroy windows
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [73]:
# Run laptop identity hider
laptop_camera_go()

Step 5: Create a CNN to Recognize Facial Keypoints

OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. In this stage of the project you will create your own end-to-end pipeline - employing convolutional networks in keras along with OpenCV - to apply a "selfie" filter to streaming video and images.

You will start by creating and then training a convolutional network that can detect facial keypoints in a small dataset of cropped images of human faces. We then guide you towards OpenCV to expanding your detection algorithm to more general images. What are facial keypoints? Let's take a look at some examples.

Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image. They mark important areas of the face - the eyes, corners of the mouth, the nose, etc. Facial keypoints can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.

Below we illustrate a filter that, using the results of this section, automatically places sunglasses on people in images (using the facial keypoints to place the glasses correctly on each face). Here, the facial keypoints have been colored lime green for visualization purposes.

Make a facial keypoint detector

But first things first: how can we make a facial keypoint detector? Well, at a high level, notice that facial keypoint detection is a regression problem. A single face corresponds to a set of 15 facial keypoints (a set of 15 corresponding $(x, y)$ coordinates, i.e., an output point). Because our input data are images, we can employ a convolutional neural network to recognize patterns in our images and learn how to identify these keypoint given sets of labeled data.

In order to train a regressor, we need a training set - a set of facial image / facial keypoint pairs to train on. For this we will be using this dataset from Kaggle. We've already downloaded this data and placed it in the data directory. Make sure that you have both the training and test data files. The training dataset contains several thousand $96 \times 96$ grayscale images of cropped human faces, along with each face's 15 corresponding facial keypoints (also called landmarks) that have been placed by hand, and recorded in $(x, y)$ coordinates. This wonderful resource also has a substantial testing set, which we will use in tinkering with our convolutional network.

To load in this data, run the Python cell below - notice we will load in both the training and testing sets.

The load_data function is in the included utils.py file.

In [16]:
from utils import *

# Load training set
X_train, y_train = load_data()
print("X_train.shape == {}".format(X_train.shape))
print("y_train.shape == {}; y_train.min == {:.3f}; y_train.max == {:.3f}".format(
    y_train.shape, y_train.min(), y_train.max()))

# Load testing set
X_test, _ = load_data(test=True)
print("X_test.shape == {}".format(X_test.shape))
Using TensorFlow backend.
X_train.shape == (2140, 96, 96, 1)
y_train.shape == (2140, 30); y_train.min == -0.920; y_train.max == 0.996
X_test.shape == (1783, 96, 96, 1)

The load_data function in utils.py originates from this excellent blog post, which you are strongly encouraged to read. Please take the time now to review this function. Note how the output values - that is, the coordinates of each set of facial landmarks - have been normalized to take on values in the range $[-1, 1]$, while the pixel values of each input point (a facial image) have been normalized to the range $[0,1]$.

Note: the original Kaggle dataset contains some images with several missing keypoints. For simplicity, the load_data function removes those images with missing labels from the dataset. As an optional extension, you are welcome to amend the load_data function to include the incomplete data points.

Visualize the Training Data

Execute the code cell below to visualize a subset of the training data.

In [17]:
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_train[i], y_train[i], ax)

For each training image, there are two landmarks per eyebrow (four total), three per eye (six total), four for the mouth, and one for the tip of the nose.

Review the plot_data function in utils.py to understand how the 30-dimensional training labels in y_train are mapped to facial locations, as this function will prove useful for your pipeline.

(ADDITIONAL) Data augmentation

To get a better result I'm going to augment training data I'll be flipping image horizontically, similar to Daniel Nouri blogpost

Let's take one image and try to flip it and it's facial points

In [18]:
import copy

img = copy.deepcopy(X_train[1])
pnt = copy.deepcopy(y_train[1][:])


def flip_hor(image, points):

    def flip(lst , pair):
        lst[pair[0]], lst[pair[1]] = lst[pair[1]], lst[pair[0]]

    img_flip = copy.deepcopy(image)
    img_flip = img_flip[:, ::-1, ::-1]

    pnt_flip = copy.deepcopy(points)
    pnt_flip[::2] = pnt_flip[::2] * -1

    flip_indices = [
        (0, 2), (1, 3), (4, 8),
        (5, 9), (6, 10), (7, 11),
        (12, 16), (13, 17), (14, 18),
        (15, 19), (22, 24), (23, 25),
        ]

    for i in flip_indices:
        flip(pnt_flip, i)
    
    img_flip = np.reshape(img_flip , (1, 96, 96, 1))
    pnt_flip = np.reshape(pnt_flip , (1, 30))
    
    return img_flip, pnt_flip

img_flip, pnt_flip = flip_hor(img, pnt)

fig = plt.figure(figsize=(9,9))
ax = fig.add_subplot(1, 2, 1, xticks=[], yticks=[])
plot_data(img, pnt, ax)
ax = fig.add_subplot(1, 2, 2, xticks=[], yticks=[])
plot_data(img_flip[0], pnt_flip[0], ax)

Let's now enlarge training set with augmented data

In [19]:
train_set_X = copy.deepcopy(X_train[:1700])
train_set_y = copy.deepcopy(y_train[:1700])

verify_set_X = copy.deepcopy(X_train[1700:])
verify_set_y = copy.deepcopy(y_train[1700:])


print(train_set_X.shape)
print(train_set_y.shape)

for i in range(len(train_set_X)):
    img, pnt = flip_hor(train_set_X[i], train_set_y[i])
    train_set_X = np.append(train_set_X, img, 0)
    train_set_y = np.append(train_set_y, pnt, 0)
    
print(train_set_X.shape)
print(train_set_y.shape)
(1700, 96, 96, 1)
(1700, 30)
(3400, 96, 96, 1)
(3400, 30)

(IMPLEMENTATION) Specify the CNN Architecture

In this section, you will specify a neural network for predicting the locations of facial keypoints. Use the code cell below to specify the architecture of your neural network. We have imported some layers that you may find useful for this task, but if you need to use more Keras layers, feel free to import them in the cell.

Your network should accept a $96 \times 96$ grayscale image as input, and it should output a vector with 30 entries, corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints. If you are not sure where to start, you can find some useful starting architectures in this blog, but you are not permitted to copy any of the architectures that you find online.

In [20]:
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Flatten, Dense, Dropout


def model_init():
    model = Sequential()
    model.add(Conv2D(filters=32, kernel_size=5, padding='same', 
                            input_shape=(96, 96, 1)))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.1))
    model.add(Conv2D(filters=64, kernel_size=3, padding='same'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.1))
    model.add(Conv2D(filters=128, kernel_size=2, padding='same'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.2))
    model.add(Conv2D(filters=192, kernel_size=2, padding='same'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.2))
    model.add(Conv2D(filters=256, kernel_size=2, padding='same'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.3))
    model.add(Flatten())
    model.add(Dense(512))
    model.add(Dropout(0.4))
    model.add(Dense(30))
    return model

m33 = model_init()


# Summarize the model
m33.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 96, 96, 32)        832       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 48, 48, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 48, 48, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 48, 48, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 24, 24, 64)        0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 24, 24, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 24, 24, 128)       32896     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 12, 12, 128)       0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 12, 12, 128)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 12, 12, 192)       98496     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 6, 6, 192)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 6, 6, 192)         0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 6, 6, 256)         196864    
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 3, 3, 256)         0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 3, 3, 256)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 2304)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               1180160   
_________________________________________________________________
dropout_6 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,543,134
Trainable params: 1,543,134
Non-trainable params: 0
_________________________________________________________________

Step 6: Compile and Train the Model

After specifying your architecture, you'll need to compile and train the model to detect facial keypoints'

(IMPLEMENTATION) Compile and Train the Model

Use the compile method to configure the learning process. Experiment with your choice of optimizer; you may have some ideas about which will work best (SGD vs. RMSprop, etc), but take the time to empirically verify your theories.

Use the fit method to train the model. Break off a validation set by setting validation_split=0.2. Save the returned History object in the history variable.

Your model is required to attain a validation loss (measured as mean squared error) of at least XYZ. When you have finished training, save your model as an HDF5 file with file path my_model.h5.

In [21]:
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler 

epochs = 300

def schedule(n):
    lr = 0.001 + 0.003*(epochs-n)/epochs
    print("Learning rate = %s" % lr)
    return lr

m33.compile(optimizer="adamax", loss='mean_squared_error', metrics=['accuracy'])


checkpointer33 = ModelCheckpoint(filepath='m33.h5', 
                               verbose=1, save_best_only=True)

# Change learning rate dynamically
scheduler_lr = LearningRateScheduler(schedule)


h33 = m33.fit(train_set_X, train_set_y, 
          validation_data=(verify_set_X, verify_set_y),
          epochs=epochs, batch_size=23, callbacks=[checkpointer33, scheduler_lr], verbose=1)
Train on 3400 samples, validate on 440 samples
Learning rate = 0.004
Epoch 1/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0672 - acc: 0.3888Epoch 00000: val_loss improved from inf to 0.00567, saving model to m34.h5
3400/3400 [==============================] - 142s - loss: 0.0670 - acc: 0.3888 - val_loss: 0.0057 - val_acc: 0.6614
Learning rate = 0.0039925
Epoch 2/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0104 - acc: 0.5068Epoch 00001: val_loss improved from 0.00567 to 0.00486, saving model to m34.h5
3400/3400 [==============================] - 165s - loss: 0.0104 - acc: 0.5065 - val_loss: 0.0049 - val_acc: 0.6977
Learning rate = 0.003985
Epoch 3/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0086 - acc: 0.5628Epoch 00002: val_loss improved from 0.00486 to 0.00458, saving model to m34.h5
3400/3400 [==============================] - 187s - loss: 0.0086 - acc: 0.5626 - val_loss: 0.0046 - val_acc: 0.6977
Learning rate = 0.0039775
Epoch 4/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0072 - acc: 0.5885Epoch 00003: val_loss improved from 0.00458 to 0.00424, saving model to m34.h5
3400/3400 [==============================] - 189s - loss: 0.0072 - acc: 0.5876 - val_loss: 0.0042 - val_acc: 0.7045
Learning rate = 0.00397
Epoch 5/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0063 - acc: 0.6209Epoch 00004: val_loss improved from 0.00424 to 0.00369, saving model to m34.h5
3400/3400 [==============================] - 182s - loss: 0.0063 - acc: 0.6209 - val_loss: 0.0037 - val_acc: 0.7136
Learning rate = 0.003962500000000001
Epoch 6/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0054 - acc: 0.6375Epoch 00005: val_loss improved from 0.00369 to 0.00316, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0054 - acc: 0.6371 - val_loss: 0.0032 - val_acc: 0.7205
Learning rate = 0.003955
Epoch 7/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0045 - acc: 0.6549Epoch 00006: val_loss improved from 0.00316 to 0.00282, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0045 - acc: 0.6556 - val_loss: 0.0028 - val_acc: 0.7455
Learning rate = 0.0039475
Epoch 8/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0039 - acc: 0.6844Epoch 00007: val_loss improved from 0.00282 to 0.00222, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0039 - acc: 0.6847 - val_loss: 0.0022 - val_acc: 0.7591
Learning rate = 0.00394
Epoch 9/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0035 - acc: 0.6932Epoch 00008: val_loss improved from 0.00222 to 0.00209, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0035 - acc: 0.6932 - val_loss: 0.0021 - val_acc: 0.7591
Learning rate = 0.0039325
Epoch 10/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0031 - acc: 0.7165Epoch 00009: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0031 - acc: 0.7168 - val_loss: 0.0022 - val_acc: 0.7773
Learning rate = 0.003925
Epoch 11/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0028 - acc: 0.7236Epoch 00010: val_loss improved from 0.00209 to 0.00182, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0028 - acc: 0.7229 - val_loss: 0.0018 - val_acc: 0.7886
Learning rate = 0.0039175
Epoch 12/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0026 - acc: 0.7286Epoch 00011: val_loss improved from 0.00182 to 0.00178, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0026 - acc: 0.7285 - val_loss: 0.0018 - val_acc: 0.7955
Learning rate = 0.00391
Epoch 13/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0024 - acc: 0.7254Epoch 00012: val_loss improved from 0.00178 to 0.00160, saving model to m34.h5
3400/3400 [==============================] - 176s - loss: 0.0024 - acc: 0.7250 - val_loss: 0.0016 - val_acc: 0.7727
Learning rate = 0.0039025
Epoch 14/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7322Epoch 00013: val_loss improved from 0.00160 to 0.00155, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0023 - acc: 0.7318 - val_loss: 0.0015 - val_acc: 0.7932
Learning rate = 0.003895
Epoch 15/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0023 - acc: 0.7319Epoch 00014: val_loss improved from 0.00155 to 0.00151, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0023 - acc: 0.7318 - val_loss: 0.0015 - val_acc: 0.7795
Learning rate = 0.0038875
Epoch 16/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.7286Epoch 00015: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0022 - acc: 0.7288 - val_loss: 0.0016 - val_acc: 0.8045
Learning rate = 0.00388
Epoch 17/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7254Epoch 00016: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0021 - acc: 0.7259 - val_loss: 0.0017 - val_acc: 0.7500
Learning rate = 0.0038725
Epoch 18/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.7289Epoch 00017: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 0.0022 - acc: 0.7294 - val_loss: 0.0019 - val_acc: 0.8000
Learning rate = 0.0038650000000000004
Epoch 19/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0021 - acc: 0.7304Epoch 00018: val_loss improved from 0.00151 to 0.00146, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0021 - acc: 0.7288 - val_loss: 0.0015 - val_acc: 0.7727
Learning rate = 0.0038575000000000003
Epoch 20/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0020 - acc: 0.7319Epoch 00019: val_loss did not improve
3400/3400 [==============================] - 181s - loss: 0.0020 - acc: 0.7312 - val_loss: 0.0015 - val_acc: 0.8000
Learning rate = 0.00385
Epoch 21/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7212Epoch 00020: val_loss improved from 0.00146 to 0.00142, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0019 - acc: 0.7215 - val_loss: 0.0014 - val_acc: 0.7909
Learning rate = 0.0038425
Epoch 22/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7507Epoch 00021: val_loss improved from 0.00142 to 0.00138, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7509 - val_loss: 0.0014 - val_acc: 0.7932
Learning rate = 0.0038350000000000003
Epoch 23/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7501Epoch 00022: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7494 - val_loss: 0.0015 - val_acc: 0.7705
Learning rate = 0.0038275
Epoch 24/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7546Epoch 00023: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7550 - val_loss: 0.0015 - val_acc: 0.7841
Learning rate = 0.0038200000000000005
Epoch 25/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7504Epoch 00024: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7503 - val_loss: 0.0015 - val_acc: 0.7795
Learning rate = 0.0038125
Epoch 26/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7543Epoch 00025: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7529 - val_loss: 0.0015 - val_acc: 0.7682
Learning rate = 0.0038050000000000002
Epoch 27/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7472Epoch 00026: val_loss improved from 0.00138 to 0.00134, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7479 - val_loss: 0.0013 - val_acc: 0.7818
Learning rate = 0.0037975
Epoch 28/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7593Epoch 00027: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7597 - val_loss: 0.0014 - val_acc: 0.8045
Learning rate = 0.0037900000000000004
Epoch 29/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7549Epoch 00028: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7544 - val_loss: 0.0015 - val_acc: 0.7568
Learning rate = 0.0037825
Epoch 30/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7466Epoch 00029: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7471 - val_loss: 0.0018 - val_acc: 0.7955
Learning rate = 0.003775
Epoch 31/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7575Epoch 00030: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7571 - val_loss: 0.0014 - val_acc: 0.7682
Learning rate = 0.0037675
Epoch 32/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7493Epoch 00031: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7494 - val_loss: 0.0017 - val_acc: 0.7727
Learning rate = 0.0037600000000000003
Epoch 33/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7484Epoch 00032: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7482 - val_loss: 0.0015 - val_acc: 0.8000
Learning rate = 0.0037524999999999998
Epoch 34/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7572Epoch 00033: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7568 - val_loss: 0.0014 - val_acc: 0.7977
Learning rate = 0.003745
Epoch 35/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7445Epoch 00034: val_loss improved from 0.00134 to 0.00126, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0019 - acc: 0.7450 - val_loss: 0.0013 - val_acc: 0.7886
Learning rate = 0.0037375
Epoch 36/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7640Epoch 00035: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7644 - val_loss: 0.0016 - val_acc: 0.8000
Learning rate = 0.0037300000000000002
Epoch 37/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7560Epoch 00036: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7565 - val_loss: 0.0014 - val_acc: 0.7955
Learning rate = 0.0037225
Epoch 38/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7510Epoch 00037: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7509 - val_loss: 0.0013 - val_acc: 0.8205
Learning rate = 0.0037150000000000004
Epoch 39/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7560Epoch 00038: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7559 - val_loss: 0.0016 - val_acc: 0.7455
Learning rate = 0.0037075
Epoch 40/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7540Epoch 00039: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0019 - acc: 0.7544 - val_loss: 0.0013 - val_acc: 0.8000
Learning rate = 0.0037
Epoch 41/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7587Epoch 00040: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7585 - val_loss: 0.0013 - val_acc: 0.7773
Learning rate = 0.0036925
Epoch 42/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7475Epoch 00041: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7474 - val_loss: 0.0014 - val_acc: 0.7818
Learning rate = 0.0036850000000000003
Epoch 43/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7440Epoch 00042: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7441 - val_loss: 0.0014 - val_acc: 0.7773
Learning rate = 0.0036774999999999998
Epoch 44/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7484Epoch 00043: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7482 - val_loss: 0.0014 - val_acc: 0.7864
Learning rate = 0.00367
Epoch 45/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7593Epoch 00044: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7591 - val_loss: 0.0015 - val_acc: 0.8045
Learning rate = 0.0036625
Epoch 46/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7519Epoch 00045: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7515 - val_loss: 0.0014 - val_acc: 0.7864
Learning rate = 0.0036550000000000003
Epoch 47/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7499Epoch 00046: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7500 - val_loss: 0.0018 - val_acc: 0.7364
Learning rate = 0.0036474999999999997
Epoch 48/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7490Epoch 00047: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7488 - val_loss: 0.0015 - val_acc: 0.7909
Learning rate = 0.00364
Epoch 49/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7560Epoch 00048: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7556 - val_loss: 0.0014 - val_acc: 0.7750
Learning rate = 0.0036325
Epoch 50/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7558Epoch 00049: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0019 - acc: 0.7553 - val_loss: 0.0014 - val_acc: 0.7955
Learning rate = 0.003625
Epoch 51/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7493Epoch 00050: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7491 - val_loss: 0.0014 - val_acc: 0.7750
Learning rate = 0.0036175
Epoch 52/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7516Epoch 00051: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7515 - val_loss: 0.0015 - val_acc: 0.7864
Learning rate = 0.00361
Epoch 53/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7499Epoch 00052: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7500 - val_loss: 0.0014 - val_acc: 0.8023
Learning rate = 0.0036025
Epoch 54/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7566Epoch 00053: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7568 - val_loss: 0.0014 - val_acc: 0.7795
Learning rate = 0.003595
Epoch 55/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7587Epoch 00054: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7588 - val_loss: 0.0014 - val_acc: 0.8000
Learning rate = 0.0035875
Epoch 56/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7563Epoch 00055: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7565 - val_loss: 0.0014 - val_acc: 0.7977
Learning rate = 0.0035800000000000003
Epoch 57/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7493Epoch 00056: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7491 - val_loss: 0.0014 - val_acc: 0.7886
Learning rate = 0.0035724999999999997
Epoch 58/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7584Epoch 00057: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7591 - val_loss: 0.0013 - val_acc: 0.7523
Learning rate = 0.003565
Epoch 59/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7555Epoch 00058: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7553 - val_loss: 0.0017 - val_acc: 0.7977
Learning rate = 0.0035575000000000003
Epoch 60/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7507Epoch 00059: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7509 - val_loss: 0.0015 - val_acc: 0.7955
Learning rate = 0.00355
Epoch 61/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7478Epoch 00060: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0019 - acc: 0.7474 - val_loss: 0.0016 - val_acc: 0.8023
Learning rate = 0.0035425000000000005
Epoch 62/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7560Epoch 00061: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7562 - val_loss: 0.0016 - val_acc: 0.7750
Learning rate = 0.003535
Epoch 63/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7431Epoch 00062: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7426 - val_loss: 0.0014 - val_acc: 0.7818
Learning rate = 0.0035275000000000003
Epoch 64/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7499Epoch 00063: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7491 - val_loss: 0.0015 - val_acc: 0.7932
Learning rate = 0.00352
Epoch 65/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7440Epoch 00064: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7441 - val_loss: 0.0018 - val_acc: 0.7795
Learning rate = 0.0035125000000000004
Epoch 66/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7599Epoch 00065: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7603 - val_loss: 0.0013 - val_acc: 0.7955
Learning rate = 0.003505
Epoch 67/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7428Epoch 00066: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7424 - val_loss: 0.0019 - val_acc: 0.7864
Learning rate = 0.0034975
Epoch 68/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7528Epoch 00067: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7529 - val_loss: 0.0014 - val_acc: 0.8023
Learning rate = 0.00349
Epoch 69/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7584Epoch 00068: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7582 - val_loss: 0.0015 - val_acc: 0.8091
Learning rate = 0.0034825
Epoch 70/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7434Epoch 00069: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7429 - val_loss: 0.0016 - val_acc: 0.7886
Learning rate = 0.003475
Epoch 71/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7552Epoch 00070: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7547 - val_loss: 0.0015 - val_acc: 0.7636
Learning rate = 0.0034675
Epoch 72/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7634Epoch 00071: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7624 - val_loss: 0.0014 - val_acc: 0.7841
Learning rate = 0.00346
Epoch 73/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7507Epoch 00072: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7506 - val_loss: 0.0013 - val_acc: 0.7886
Learning rate = 0.0034525
Epoch 74/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7619Epoch 00073: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7618 - val_loss: 0.0015 - val_acc: 0.7636
Learning rate = 0.003445
Epoch 75/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7531Epoch 00074: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7532 - val_loss: 0.0015 - val_acc: 0.8000
Learning rate = 0.0034375
Epoch 76/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7478Epoch 00075: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0019 - acc: 0.7479 - val_loss: 0.0014 - val_acc: 0.8318
Learning rate = 0.00343
Epoch 77/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7537Epoch 00076: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7544 - val_loss: 0.0013 - val_acc: 0.7932
Learning rate = 0.0034224999999999998
Epoch 78/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7558Epoch 00077: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7562 - val_loss: 0.0014 - val_acc: 0.7909
Learning rate = 0.003415
Epoch 79/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7469Epoch 00078: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0019 - acc: 0.7471 - val_loss: 0.0013 - val_acc: 0.7727
Learning rate = 0.0034075
Epoch 80/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7560Epoch 00079: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0019 - acc: 0.7562 - val_loss: 0.0013 - val_acc: 0.7864
Learning rate = 0.0034
Epoch 81/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7398Epoch 00080: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0019 - acc: 0.7400 - val_loss: 0.0014 - val_acc: 0.8023
Learning rate = 0.0033925
Epoch 82/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7578Epoch 00081: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0018 - acc: 0.7576 - val_loss: 0.0014 - val_acc: 0.7682
Learning rate = 0.003385
Epoch 83/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7493Epoch 00082: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7494 - val_loss: 0.0013 - val_acc: 0.8068
Learning rate = 0.0033775000000000003
Epoch 84/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7513Epoch 00083: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7512 - val_loss: 0.0015 - val_acc: 0.7864
Learning rate = 0.00337
Epoch 85/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7451Epoch 00084: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0019 - acc: 0.7453 - val_loss: 0.0013 - val_acc: 0.8091
Learning rate = 0.0033625
Epoch 86/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7637Epoch 00085: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7644 - val_loss: 0.0015 - val_acc: 0.7795
Learning rate = 0.0033550000000000003
Epoch 87/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7543Epoch 00086: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7547 - val_loss: 0.0014 - val_acc: 0.7841
Learning rate = 0.0033475
Epoch 88/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7560Epoch 00087: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7565 - val_loss: 0.0013 - val_acc: 0.7864
Learning rate = 0.00334
Epoch 89/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0019 - acc: 0.7625Epoch 00088: val_loss did not improve
3400/3400 [==============================] - 172s - loss: 0.0019 - acc: 0.7624 - val_loss: 0.0014 - val_acc: 0.7841
Learning rate = 0.0033325
Epoch 90/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7525Epoch 00089: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7526 - val_loss: 0.0013 - val_acc: 0.7886
Learning rate = 0.0033250000000000003
Epoch 91/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7602Epoch 00090: val_loss did not improve
3400/3400 [==============================] - 172s - loss: 0.0018 - acc: 0.7603 - val_loss: 0.0013 - val_acc: 0.8091
Learning rate = 0.0033175
Epoch 92/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7463Epoch 00091: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7462 - val_loss: 0.0016 - val_acc: 0.7841
Learning rate = 0.00331
Epoch 93/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7631Epoch 00092: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7624 - val_loss: 0.0014 - val_acc: 0.8000
Learning rate = 0.0033025000000000003
Epoch 94/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7528Epoch 00093: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7535 - val_loss: 0.0013 - val_acc: 0.7955
Learning rate = 0.003295
Epoch 95/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7496Epoch 00094: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7494 - val_loss: 0.0014 - val_acc: 0.7614
Learning rate = 0.0032875
Epoch 96/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7611Epoch 00095: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7612 - val_loss: 0.0014 - val_acc: 0.7841
Learning rate = 0.00328
Epoch 97/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7422Epoch 00096: val_loss improved from 0.00126 to 0.00123, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7418 - val_loss: 0.0012 - val_acc: 0.8045
Learning rate = 0.0032725000000000002
Epoch 98/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7578Epoch 00097: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7574 - val_loss: 0.0014 - val_acc: 0.7864
Learning rate = 0.003265
Epoch 99/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7519Epoch 00098: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7518 - val_loss: 0.0014 - val_acc: 0.8023
Learning rate = 0.0032575
Epoch 100/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7566Epoch 00099: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7571 - val_loss: 0.0015 - val_acc: 0.7727
Learning rate = 0.0032500000000000003
Epoch 101/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7602Epoch 00100: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7597 - val_loss: 0.0014 - val_acc: 0.7955
Learning rate = 0.0032425
Epoch 102/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7487Epoch 00101: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0018 - acc: 0.7488 - val_loss: 0.0013 - val_acc: 0.7909
Learning rate = 0.003235
Epoch 103/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7516Epoch 00102: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7518 - val_loss: 0.0013 - val_acc: 0.8045
Learning rate = 0.0032275
Epoch 104/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7617Epoch 00103: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7618 - val_loss: 0.0012 - val_acc: 0.7818
Learning rate = 0.00322
Epoch 105/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7504Epoch 00104: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0018 - acc: 0.7503 - val_loss: 0.0013 - val_acc: 0.7295
Learning rate = 0.0032125
Epoch 106/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7590Epoch 00105: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7585 - val_loss: 0.0013 - val_acc: 0.7955
Learning rate = 0.003205
Epoch 107/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7587Epoch 00106: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7588 - val_loss: 0.0012 - val_acc: 0.8136
Learning rate = 0.0031975
Epoch 108/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0018 - acc: 0.7501Epoch 00107: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0018 - acc: 0.7500 - val_loss: 0.0013 - val_acc: 0.7795
Learning rate = 0.00319
Epoch 109/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7614Epoch 00108: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7618 - val_loss: 0.0013 - val_acc: 0.8000
Learning rate = 0.0031825
Epoch 110/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7563Epoch 00109: val_loss improved from 0.00123 to 0.00122, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7565 - val_loss: 0.0012 - val_acc: 0.7864
Learning rate = 0.003175
Epoch 111/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7525Epoch 00110: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7529 - val_loss: 0.0012 - val_acc: 0.7909
Learning rate = 0.0031675
Epoch 112/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7608Epoch 00111: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0017 - acc: 0.7606 - val_loss: 0.0013 - val_acc: 0.7727
Learning rate = 0.00316
Epoch 113/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7608Epoch 00112: val_loss improved from 0.00122 to 0.00120, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7606 - val_loss: 0.0012 - val_acc: 0.7841
Learning rate = 0.0031525
Epoch 114/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7673Epoch 00113: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7674 - val_loss: 0.0013 - val_acc: 0.7386
Learning rate = 0.003145
Epoch 115/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7619Epoch 00114: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7624 - val_loss: 0.0014 - val_acc: 0.8114
Learning rate = 0.0031375
Epoch 116/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7673Epoch 00115: val_loss did not improve
3400/3400 [==============================] - 172s - loss: 0.0017 - acc: 0.7674 - val_loss: 0.0014 - val_acc: 0.7727
Learning rate = 0.00313
Epoch 117/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7566Epoch 00116: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7571 - val_loss: 0.0014 - val_acc: 0.8068
Learning rate = 0.0031225
Epoch 118/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7617Epoch 00117: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7618 - val_loss: 0.0014 - val_acc: 0.7750
Learning rate = 0.003115
Epoch 119/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7743Epoch 00118: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7738 - val_loss: 0.0012 - val_acc: 0.8045
Learning rate = 0.0031075
Epoch 120/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7678Epoch 00119: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7679 - val_loss: 0.0013 - val_acc: 0.8000
Learning rate = 0.0031
Epoch 121/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7546Epoch 00120: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7550 - val_loss: 0.0012 - val_acc: 0.7886
Learning rate = 0.0030924999999999998
Epoch 122/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7619Epoch 00121: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7626 - val_loss: 0.0013 - val_acc: 0.8023
Learning rate = 0.003085
Epoch 123/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7605Epoch 00122: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0017 - acc: 0.7600 - val_loss: 0.0013 - val_acc: 0.7523
Learning rate = 0.0030775000000000004
Epoch 124/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7602Epoch 00123: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7603 - val_loss: 0.0013 - val_acc: 0.8023
Learning rate = 0.0030700000000000002
Epoch 125/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7649Epoch 00124: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7644 - val_loss: 0.0013 - val_acc: 0.8000
Learning rate = 0.0030625
Epoch 126/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7543Epoch 00125: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7544 - val_loss: 0.0013 - val_acc: 0.7682
Learning rate = 0.003055
Epoch 127/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7625Epoch 00126: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7629 - val_loss: 0.0013 - val_acc: 0.8182
Learning rate = 0.0030475000000000003
Epoch 128/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7611Epoch 00127: val_loss improved from 0.00120 to 0.00116, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0017 - acc: 0.7609 - val_loss: 0.0012 - val_acc: 0.8068
Learning rate = 0.00304
Epoch 129/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7699Epoch 00128: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0017 - acc: 0.7700 - val_loss: 0.0013 - val_acc: 0.7932
Learning rate = 0.0030325
Epoch 130/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7590Epoch 00129: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7585 - val_loss: 0.0012 - val_acc: 0.7955
Learning rate = 0.0030250000000000003
Epoch 131/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7578Epoch 00130: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7579 - val_loss: 0.0013 - val_acc: 0.7886
Learning rate = 0.0030175
Epoch 132/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7634Epoch 00131: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7632 - val_loss: 0.0012 - val_acc: 0.7818
Learning rate = 0.00301
Epoch 133/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0017 - acc: 0.7693Epoch 00132: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0017 - acc: 0.7697 - val_loss: 0.0014 - val_acc: 0.8159
Learning rate = 0.0030025
Epoch 134/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7752Epoch 00133: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7753 - val_loss: 0.0012 - val_acc: 0.8205
Learning rate = 0.0029950000000000003
Epoch 135/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7513Epoch 00134: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7518 - val_loss: 0.0013 - val_acc: 0.8068
Learning rate = 0.0029875
Epoch 136/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7676Epoch 00135: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7674 - val_loss: 0.0012 - val_acc: 0.8182
Learning rate = 0.00298
Epoch 137/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7614Epoch 00136: val_loss improved from 0.00116 to 0.00115, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7615 - val_loss: 0.0011 - val_acc: 0.8023
Learning rate = 0.0029725000000000003
Epoch 138/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7602Epoch 00137: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7606 - val_loss: 0.0012 - val_acc: 0.8068
Learning rate = 0.002965
Epoch 139/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7746Epoch 00138: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0016 - acc: 0.7747 - val_loss: 0.0012 - val_acc: 0.8045
Learning rate = 0.0029575
Epoch 140/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7593Epoch 00139: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7597 - val_loss: 0.0012 - val_acc: 0.8227
Learning rate = 0.0029500000000000004
Epoch 141/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7678Epoch 00140: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0016 - acc: 0.7676 - val_loss: 0.0012 - val_acc: 0.8091
Learning rate = 0.0029425
Epoch 142/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7720Epoch 00141: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0015 - acc: 0.7712 - val_loss: 0.0012 - val_acc: 0.7818
Learning rate = 0.002935
Epoch 143/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7649Epoch 00142: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 0.0016 - acc: 0.7647 - val_loss: 0.0012 - val_acc: 0.8091
Learning rate = 0.0029275
Epoch 144/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7670Epoch 00143: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0016 - acc: 0.7671 - val_loss: 0.0012 - val_acc: 0.7977
Learning rate = 0.00292
Epoch 145/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7678Epoch 00144: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0016 - acc: 0.7676 - val_loss: 0.0012 - val_acc: 0.8159
Learning rate = 0.0029125
Epoch 146/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7661Epoch 00145: val_loss improved from 0.00115 to 0.00113, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7662 - val_loss: 0.0011 - val_acc: 0.8091
Learning rate = 0.002905
Epoch 147/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7735Epoch 00146: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7726 - val_loss: 0.0012 - val_acc: 0.7886
Learning rate = 0.0028975
Epoch 148/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0016 - acc: 0.7711Epoch 00147: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0016 - acc: 0.7709 - val_loss: 0.0012 - val_acc: 0.7773
Learning rate = 0.00289
Epoch 149/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7779Epoch 00148: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7774 - val_loss: 0.0012 - val_acc: 0.8136
Learning rate = 0.0028825
Epoch 150/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7625Epoch 00149: val_loss improved from 0.00113 to 0.00111, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7621 - val_loss: 0.0011 - val_acc: 0.7909
Learning rate = 0.002875
Epoch 151/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7761Epoch 00150: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7753 - val_loss: 0.0011 - val_acc: 0.8205
Learning rate = 0.0028675000000000003
Epoch 152/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7646Epoch 00151: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7653 - val_loss: 0.0014 - val_acc: 0.8000
Learning rate = 0.0028599999999999997
Epoch 153/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7661Epoch 00152: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7662 - val_loss: 0.0012 - val_acc: 0.7977
Learning rate = 0.0028525
Epoch 154/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7649Epoch 00153: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0015 - acc: 0.7650 - val_loss: 0.0011 - val_acc: 0.7909
Learning rate = 0.0028450000000000003
Epoch 155/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7720Epoch 00154: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7721 - val_loss: 0.0011 - val_acc: 0.8182
Learning rate = 0.0028374999999999997
Epoch 156/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7711Epoch 00155: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7709 - val_loss: 0.0011 - val_acc: 0.7841
Learning rate = 0.00283
Epoch 157/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7726Epoch 00156: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7729 - val_loss: 0.0011 - val_acc: 0.8023
Learning rate = 0.0028225
Epoch 158/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7776Epoch 00157: val_loss improved from 0.00111 to 0.00111, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0015 - acc: 0.7774 - val_loss: 0.0011 - val_acc: 0.8250
Learning rate = 0.002815
Epoch 159/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7805Epoch 00158: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0015 - acc: 0.7803 - val_loss: 0.0013 - val_acc: 0.8114
Learning rate = 0.0028075
Epoch 160/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7693Epoch 00159: val_loss improved from 0.00111 to 0.00109, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0015 - acc: 0.7697 - val_loss: 0.0011 - val_acc: 0.8045
Learning rate = 0.0028
Epoch 161/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7811Epoch 00160: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0014 - acc: 0.7812 - val_loss: 0.0012 - val_acc: 0.8091
Learning rate = 0.0027925
Epoch 162/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7791Epoch 00161: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0015 - acc: 0.7797 - val_loss: 0.0012 - val_acc: 0.8136
Learning rate = 0.002785
Epoch 163/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7723Epoch 00162: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0015 - acc: 0.7718 - val_loss: 0.0011 - val_acc: 0.8000
Learning rate = 0.0027775
Epoch 164/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7743Epoch 00163: val_loss did not improve
3400/3400 [==============================] - 185s - loss: 0.0014 - acc: 0.7741 - val_loss: 0.0011 - val_acc: 0.7955
Learning rate = 0.00277
Epoch 165/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7732Epoch 00164: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0015 - acc: 0.7735 - val_loss: 0.0011 - val_acc: 0.7955
Learning rate = 0.0027625
Epoch 166/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7708Epoch 00165: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0015 - acc: 0.7700 - val_loss: 0.0011 - val_acc: 0.8386
Learning rate = 0.0027550000000000005
Epoch 167/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7732Epoch 00166: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0014 - acc: 0.7735 - val_loss: 0.0012 - val_acc: 0.8114
Learning rate = 0.0027475
Epoch 168/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0015 - acc: 0.7664Epoch 00167: val_loss improved from 0.00109 to 0.00104, saving model to m34.h5
3400/3400 [==============================] - 180s - loss: 0.0015 - acc: 0.7662 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.0027400000000000002
Epoch 169/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7717Epoch 00168: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0014 - acc: 0.7721 - val_loss: 0.0012 - val_acc: 0.7886
Learning rate = 0.0027325
Epoch 170/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7838Epoch 00169: val_loss did not improve
3400/3400 [==============================] - 186s - loss: 0.0014 - acc: 0.7841 - val_loss: 0.0011 - val_acc: 0.8000
Learning rate = 0.002725
Epoch 171/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7729Epoch 00170: val_loss did not improve
3400/3400 [==============================] - 194s - loss: 0.0014 - acc: 0.7732 - val_loss: 0.0012 - val_acc: 0.7932
Learning rate = 0.0027175000000000003
Epoch 172/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7770Epoch 00171: val_loss did not improve
3400/3400 [==============================] - 193s - loss: 0.0014 - acc: 0.7771 - val_loss: 0.0011 - val_acc: 0.7795
Learning rate = 0.00271
Epoch 173/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7799Epoch 00172: val_loss improved from 0.00104 to 0.00104, saving model to m34.h5
3400/3400 [==============================] - 177s - loss: 0.0014 - acc: 0.7797 - val_loss: 0.0010 - val_acc: 0.8023
Learning rate = 0.0027025
Epoch 174/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7853Epoch 00173: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0014 - acc: 0.7850 - val_loss: 0.0011 - val_acc: 0.7955
Learning rate = 0.0026950000000000003
Epoch 175/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7664Epoch 00174: val_loss improved from 0.00104 to 0.00101, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7668 - val_loss: 0.0010 - val_acc: 0.8091
Learning rate = 0.0026875000000000002
Epoch 176/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7746Epoch 00175: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0014 - acc: 0.7747 - val_loss: 0.0011 - val_acc: 0.8091
Learning rate = 0.00268
Epoch 177/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7767Epoch 00176: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7771 - val_loss: 0.0012 - val_acc: 0.7886
Learning rate = 0.0026725000000000004
Epoch 178/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7735Epoch 00177: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0014 - acc: 0.7741 - val_loss: 0.0011 - val_acc: 0.8000
Learning rate = 0.002665
Epoch 179/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7764Epoch 00178: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7759 - val_loss: 0.0011 - val_acc: 0.7977
Learning rate = 0.0026575
Epoch 180/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7761Epoch 00179: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7762 - val_loss: 0.0012 - val_acc: 0.8000
Learning rate = 0.00265
Epoch 181/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7861Epoch 00180: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7865 - val_loss: 0.0011 - val_acc: 0.7977
Learning rate = 0.0026425
Epoch 182/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7696Epoch 00181: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0014 - acc: 0.7697 - val_loss: 0.0011 - val_acc: 0.7886
Learning rate = 0.002635
Epoch 183/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0014 - acc: 0.7743Epoch 00182: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0014 - acc: 0.7744 - val_loss: 0.0011 - val_acc: 0.8136
Learning rate = 0.0026275
Epoch 184/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7696Epoch 00183: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 0.0013 - acc: 0.7700 - val_loss: 0.0011 - val_acc: 0.7909
Learning rate = 0.00262
Epoch 185/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7767Epoch 00184: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0013 - acc: 0.7771 - val_loss: 0.0010 - val_acc: 0.8114
Learning rate = 0.0026125000000000002
Epoch 186/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7850Epoch 00185: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0013 - acc: 0.7853 - val_loss: 0.0011 - val_acc: 0.7818
Learning rate = 0.002605
Epoch 187/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7826Epoch 00186: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7824 - val_loss: 0.0011 - val_acc: 0.8091
Learning rate = 0.0025975
Epoch 188/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7909Epoch 00187: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7909 - val_loss: 0.0012 - val_acc: 0.8159
Learning rate = 0.0025900000000000003
Epoch 189/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7835Epoch 00188: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7824 - val_loss: 0.0012 - val_acc: 0.8023
Learning rate = 0.0025824999999999997
Epoch 190/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7770Epoch 00189: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7774 - val_loss: 0.0011 - val_acc: 0.8000
Learning rate = 0.002575
Epoch 191/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7873Epoch 00190: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7868 - val_loss: 0.0011 - val_acc: 0.7864
Learning rate = 0.0025675000000000003
Epoch 192/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7888Epoch 00191: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7891 - val_loss: 0.0011 - val_acc: 0.7795
Learning rate = 0.0025599999999999998
Epoch 193/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7912Epoch 00192: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7912 - val_loss: 0.0011 - val_acc: 0.8023
Learning rate = 0.0025525
Epoch 194/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7678Epoch 00193: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7679 - val_loss: 0.0010 - val_acc: 0.8386
Learning rate = 0.002545
Epoch 195/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7732Epoch 00194: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7732 - val_loss: 0.0011 - val_acc: 0.8068
Learning rate = 0.0025375
Epoch 196/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7906Epoch 00195: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7900 - val_loss: 0.0011 - val_acc: 0.7886
Learning rate = 0.00253
Epoch 197/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7950Epoch 00196: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7947 - val_loss: 0.0011 - val_acc: 0.8045
Learning rate = 0.0025225
Epoch 198/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7864Epoch 00197: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0013 - acc: 0.7862 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.002515
Epoch 199/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7850Epoch 00198: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0013 - acc: 0.7850 - val_loss: 0.0011 - val_acc: 0.8159
Learning rate = 0.0025075
Epoch 200/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7882Epoch 00199: val_loss improved from 0.00101 to 0.00097, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0013 - acc: 0.7885 - val_loss: 9.7364e-04 - val_acc: 0.8227
Learning rate = 0.0025
Epoch 201/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7776Epoch 00200: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0013 - acc: 0.7771 - val_loss: 0.0011 - val_acc: 0.8182
Learning rate = 0.0024925
Epoch 202/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7950Epoch 00201: val_loss did not improve
3400/3400 [==============================] - 172s - loss: 0.0013 - acc: 0.7950 - val_loss: 0.0010 - val_acc: 0.8114
Learning rate = 0.0024850000000000002
Epoch 203/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7791Epoch 00202: val_loss improved from 0.00097 to 0.00094, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 0.0012 - acc: 0.7794 - val_loss: 9.4241e-04 - val_acc: 0.8045
Learning rate = 0.0024774999999999997
Epoch 204/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7814Epoch 00203: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0012 - acc: 0.7818 - val_loss: 0.0011 - val_acc: 0.7955
Learning rate = 0.00247
Epoch 205/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0013 - acc: 0.7817Epoch 00204: val_loss did not improve
3400/3400 [==============================] - 182s - loss: 0.0013 - acc: 0.7809 - val_loss: 9.8074e-04 - val_acc: 0.7886
Learning rate = 0.0024625
Epoch 206/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7885Epoch 00205: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7882 - val_loss: 0.0011 - val_acc: 0.8045
Learning rate = 0.0024549999999999997
Epoch 207/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7917Epoch 00206: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0012 - acc: 0.7918 - val_loss: 0.0010 - val_acc: 0.8023
Learning rate = 0.0024475
Epoch 208/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7867Epoch 00207: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7868 - val_loss: 9.7797e-04 - val_acc: 0.8023
Learning rate = 0.0024400000000000003
Epoch 209/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7971Epoch 00208: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0012 - acc: 0.7976 - val_loss: 0.0011 - val_acc: 0.7841
Learning rate = 0.0024325
Epoch 210/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7935Epoch 00209: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7935 - val_loss: 0.0010 - val_acc: 0.8114
Learning rate = 0.002425
Epoch 211/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7844Epoch 00210: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7841 - val_loss: 0.0010 - val_acc: 0.8386
Learning rate = 0.0024175000000000004
Epoch 212/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7855Epoch 00211: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7847 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.0024100000000000002
Epoch 213/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7847Epoch 00212: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0012 - acc: 0.7850 - val_loss: 0.0011 - val_acc: 0.7773
Learning rate = 0.0024025
Epoch 214/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7982Epoch 00213: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7982 - val_loss: 0.0010 - val_acc: 0.7977
Learning rate = 0.0023950000000000004
Epoch 215/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7853Epoch 00214: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7853 - val_loss: 0.0010 - val_acc: 0.7932
Learning rate = 0.0023875
Epoch 216/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7870Epoch 00215: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0012 - acc: 0.7871 - val_loss: 0.0012 - val_acc: 0.8023
Learning rate = 0.00238
Epoch 217/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7932Epoch 00216: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7938 - val_loss: 0.0010 - val_acc: 0.8295
Learning rate = 0.0023725
Epoch 218/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7959Epoch 00217: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7962 - val_loss: 9.7353e-04 - val_acc: 0.8091
Learning rate = 0.002365
Epoch 219/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7897Epoch 00218: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7891 - val_loss: 0.0010 - val_acc: 0.8295
Learning rate = 0.0023575000000000002
Epoch 220/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7950Epoch 00219: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0012 - acc: 0.7950 - val_loss: 9.9567e-04 - val_acc: 0.8455
Learning rate = 0.00235
Epoch 221/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7917Epoch 00220: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0012 - acc: 0.7915 - val_loss: 0.0011 - val_acc: 0.8114
Learning rate = 0.0023425
Epoch 222/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7853Epoch 00221: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0012 - acc: 0.7859 - val_loss: 9.5433e-04 - val_acc: 0.8205
Learning rate = 0.0023350000000000003
Epoch 223/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7979Epoch 00222: val_loss improved from 0.00094 to 0.00093, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0012 - acc: 0.7979 - val_loss: 9.3260e-04 - val_acc: 0.8045
Learning rate = 0.0023275
Epoch 224/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7914Epoch 00223: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0012 - acc: 0.7906 - val_loss: 0.0010 - val_acc: 0.8000
Learning rate = 0.00232
Epoch 225/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.7988Epoch 00224: val_loss did not improve
3400/3400 [==============================] - 179s - loss: 0.0012 - acc: 0.7988 - val_loss: 9.9396e-04 - val_acc: 0.8205
Learning rate = 0.0023125000000000003
Epoch 226/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8041Epoch 00225: val_loss improved from 0.00093 to 0.00093, saving model to m34.h5
3400/3400 [==============================] - 176s - loss: 0.0011 - acc: 0.8041 - val_loss: 9.2720e-04 - val_acc: 0.8273
Learning rate = 0.0023049999999999998
Epoch 227/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0012 - acc: 0.8000Epoch 00226: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0012 - acc: 0.7997 - val_loss: 0.0011 - val_acc: 0.8000
Learning rate = 0.0022975
Epoch 228/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7897Epoch 00227: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7894 - val_loss: 9.6256e-04 - val_acc: 0.8182
Learning rate = 0.0022900000000000004
Epoch 229/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7941Epoch 00228: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7941 - val_loss: 0.0010 - val_acc: 0.8227
Learning rate = 0.0022825
Epoch 230/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7891Epoch 00229: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0011 - acc: 0.7894 - val_loss: 9.6779e-04 - val_acc: 0.8295
Learning rate = 0.002275
Epoch 231/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8021Epoch 00230: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.8024 - val_loss: 9.9849e-04 - val_acc: 0.8000
Learning rate = 0.0022675
Epoch 232/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7956Epoch 00231: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7956 - val_loss: 9.5327e-04 - val_acc: 0.8159
Learning rate = 0.00226
Epoch 233/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7858Epoch 00232: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0011 - acc: 0.7859 - val_loss: 9.5672e-04 - val_acc: 0.8023
Learning rate = 0.0022525
Epoch 234/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7982Epoch 00233: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7982 - val_loss: 0.0011 - val_acc: 0.8318
Learning rate = 0.002245
Epoch 235/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7982Epoch 00234: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0011 - acc: 0.7979 - val_loss: 9.4786e-04 - val_acc: 0.8227
Learning rate = 0.0022375
Epoch 236/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8032Epoch 00235: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.8035 - val_loss: 9.8756e-04 - val_acc: 0.8205
Learning rate = 0.00223
Epoch 237/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8091Epoch 00236: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8091 - val_loss: 9.5537e-04 - val_acc: 0.7864
Learning rate = 0.0022225
Epoch 238/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7979Epoch 00237: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7976 - val_loss: 0.0010 - val_acc: 0.7909
Learning rate = 0.002215
Epoch 239/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7947Epoch 00238: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7938 - val_loss: 9.3887e-04 - val_acc: 0.8045
Learning rate = 0.0022075000000000003
Epoch 240/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7988Epoch 00239: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.7988 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.0021999999999999997
Epoch 241/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8041Epoch 00240: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 0.0011 - acc: 0.8044 - val_loss: 9.3625e-04 - val_acc: 0.8295
Learning rate = 0.0021925
Epoch 242/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8121Epoch 00241: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0011 - acc: 0.8124 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.0021850000000000003
Epoch 243/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.7976Epoch 00242: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0011 - acc: 0.7974 - val_loss: 9.8633e-04 - val_acc: 0.8136
Learning rate = 0.0021775
Epoch 244/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8059Epoch 00243: val_loss improved from 0.00093 to 0.00090, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8062 - val_loss: 9.0262e-04 - val_acc: 0.8273
Learning rate = 0.00217
Epoch 245/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8024Epoch 00244: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8021 - val_loss: 9.4027e-04 - val_acc: 0.8227
Learning rate = 0.0021625000000000004
Epoch 246/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8032Epoch 00245: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 0.0011 - acc: 0.8032 - val_loss: 9.8825e-04 - val_acc: 0.8295
Learning rate = 0.002155
Epoch 247/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8044Epoch 00246: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0011 - acc: 0.8041 - val_loss: 9.5917e-04 - val_acc: 0.8432
Learning rate = 0.0021475
Epoch 248/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8080Epoch 00247: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0011 - acc: 0.8074 - val_loss: 9.2800e-04 - val_acc: 0.8182
Learning rate = 0.00214
Epoch 249/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.7988Epoch 00248: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.7985 - val_loss: 9.9969e-04 - val_acc: 0.8182
Learning rate = 0.0021325
Epoch 250/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8065Epoch 00249: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.8068 - val_loss: 9.3170e-04 - val_acc: 0.8227
Learning rate = 0.002125
Epoch 251/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8015Epoch 00250: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8012 - val_loss: 9.2332e-04 - val_acc: 0.8250
Learning rate = 0.0021175
Epoch 252/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8156Epoch 00251: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.8150 - val_loss: 9.1234e-04 - val_acc: 0.8250
Learning rate = 0.00211
Epoch 253/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8074Epoch 00252: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8074 - val_loss: 0.0010 - val_acc: 0.8159
Learning rate = 0.0021025
Epoch 254/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0011 - acc: 0.8068Epoch 00253: val_loss improved from 0.00090 to 0.00090, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0011 - acc: 0.8071 - val_loss: 8.9616e-04 - val_acc: 0.8182
Learning rate = 0.002095
Epoch 255/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8088Epoch 00254: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.8091 - val_loss: 9.2832e-04 - val_acc: 0.8386
Learning rate = 0.0020875
Epoch 256/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.7994Epoch 00255: val_loss improved from 0.00090 to 0.00087, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.7997 - val_loss: 8.7236e-04 - val_acc: 0.8205
Learning rate = 0.0020800000000000003
Epoch 257/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8094Epoch 00256: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 0.0010 - acc: 0.8097 - val_loss: 9.4638e-04 - val_acc: 0.8182
Learning rate = 0.0020724999999999997
Epoch 258/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8044Epoch 00257: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.8041 - val_loss: 9.1943e-04 - val_acc: 0.8091
Learning rate = 0.002065
Epoch 259/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8015Epoch 00258: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 0.0010 - acc: 0.8015 - val_loss: 9.1735e-04 - val_acc: 0.8364
Learning rate = 0.0020575000000000003
Epoch 260/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8071Epoch 00259: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0010 - acc: 0.8074 - val_loss: 9.2047e-04 - val_acc: 0.8136
Learning rate = 0.0020499999999999997
Epoch 261/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.8986e-04 - acc: 0.8050Epoch 00260: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.9073e-04 - acc: 0.8047 - val_loss: 9.4021e-04 - val_acc: 0.7591
Learning rate = 0.0020425
Epoch 262/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8071Epoch 00261: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 0.0010 - acc: 0.8071 - val_loss: 9.3045e-04 - val_acc: 0.8386
Learning rate = 0.0020350000000000004
Epoch 263/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.9413e-04 - acc: 0.8094Epoch 00262: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.9330e-04 - acc: 0.8097 - val_loss: 9.1661e-04 - val_acc: 0.8023
Learning rate = 0.0020275
Epoch 264/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.8107e-04 - acc: 0.8077Epoch 00263: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 9.8142e-04 - acc: 0.8074 - val_loss: 9.1944e-04 - val_acc: 0.8295
Learning rate = 0.00202
Epoch 265/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.9263e-04 - acc: 0.8015Epoch 00264: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.9255e-04 - acc: 0.8018 - val_loss: 9.3812e-04 - val_acc: 0.8136
Learning rate = 0.0020125000000000004
Epoch 266/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.9110e-04 - acc: 0.8106Epoch 00265: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.9077e-04 - acc: 0.8112 - val_loss: 9.2990e-04 - val_acc: 0.8227
Learning rate = 0.002005
Epoch 267/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.9375e-04 - acc: 0.8136Epoch 00266: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.9436e-04 - acc: 0.8138 - val_loss: 9.1951e-04 - val_acc: 0.8273
Learning rate = 0.0019975
Epoch 268/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.7194e-04 - acc: 0.8056Epoch 00267: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 9.7146e-04 - acc: 0.8059 - val_loss: 9.2015e-04 - val_acc: 0.8227
Learning rate = 0.00199
Epoch 269/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.8064e-04 - acc: 0.8112Epoch 00268: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.7993e-04 - acc: 0.8115 - val_loss: 9.1521e-04 - val_acc: 0.8159
Learning rate = 0.0019825
Epoch 270/400
3390/3400 [============================>.] - ETA: 0s - loss: 0.0010 - acc: 0.8109Epoch 00269: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 0.0010 - acc: 0.8109 - val_loss: 9.4673e-04 - val_acc: 0.8227
Learning rate = 0.001975
Epoch 271/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.7069e-04 - acc: 0.8124Epoch 00270: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 9.7081e-04 - acc: 0.8124 - val_loss: 8.8818e-04 - val_acc: 0.8136
Learning rate = 0.0019675
Epoch 272/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.7227e-04 - acc: 0.8083Epoch 00271: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.7199e-04 - acc: 0.8085 - val_loss: 9.2922e-04 - val_acc: 0.8227
Learning rate = 0.00196
Epoch 273/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.7699e-04 - acc: 0.8024Epoch 00272: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.7670e-04 - acc: 0.8021 - val_loss: 8.8989e-04 - val_acc: 0.8341
Learning rate = 0.0019525
Epoch 274/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.6823e-04 - acc: 0.8145Epoch 00273: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 9.6879e-04 - acc: 0.8147 - val_loss: 9.0348e-04 - val_acc: 0.8364
Learning rate = 0.0019450000000000001
Epoch 275/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.7998e-04 - acc: 0.8027Epoch 00274: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.7912e-04 - acc: 0.8029 - val_loss: 9.5880e-04 - val_acc: 0.8295
Learning rate = 0.0019375
Epoch 276/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3648e-04 - acc: 0.8147Epoch 00275: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 9.3693e-04 - acc: 0.8144 - val_loss: 8.8809e-04 - val_acc: 0.8455
Learning rate = 0.0019299999999999999
Epoch 277/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3660e-04 - acc: 0.8162Epoch 00276: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.3637e-04 - acc: 0.8153 - val_loss: 9.1529e-04 - val_acc: 0.8295
Learning rate = 0.0019225000000000002
Epoch 278/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.4916e-04 - acc: 0.8088Epoch 00277: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 9.4946e-04 - acc: 0.8088 - val_loss: 9.3700e-04 - val_acc: 0.8205
Learning rate = 0.001915
Epoch 279/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.4049e-04 - acc: 0.8142Epoch 00278: val_loss improved from 0.00087 to 0.00087, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 9.4043e-04 - acc: 0.8141 - val_loss: 8.6615e-04 - val_acc: 0.8182
Learning rate = 0.0019075
Epoch 280/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3855e-04 - acc: 0.8162Epoch 00279: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 9.3821e-04 - acc: 0.8159 - val_loss: 9.4778e-04 - val_acc: 0.8159
Learning rate = 0.0019
Epoch 281/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.4822e-04 - acc: 0.8044Epoch 00280: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.4800e-04 - acc: 0.8041 - val_loss: 9.0110e-04 - val_acc: 0.8386
Learning rate = 0.0018925
Epoch 282/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3944e-04 - acc: 0.8156Epoch 00281: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.3973e-04 - acc: 0.8156 - val_loss: 8.9844e-04 - val_acc: 0.8295
Learning rate = 0.001885
Epoch 283/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3441e-04 - acc: 0.8065Epoch 00282: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.3380e-04 - acc: 0.8071 - val_loss: 8.6719e-04 - val_acc: 0.8295
Learning rate = 0.0018775000000000003
Epoch 284/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.3463e-04 - acc: 0.8103Epoch 00283: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.3478e-04 - acc: 0.8100 - val_loss: 9.4975e-04 - val_acc: 0.8227
Learning rate = 0.0018700000000000001
Epoch 285/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.2268e-04 - acc: 0.8218Epoch 00284: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.2209e-04 - acc: 0.8212 - val_loss: 8.8840e-04 - val_acc: 0.8136
Learning rate = 0.0018625
Epoch 286/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.2229e-04 - acc: 0.8103Epoch 00285: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 9.2215e-04 - acc: 0.8103 - val_loss: 9.1808e-04 - val_acc: 0.8182
Learning rate = 0.001855
Epoch 287/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.1712e-04 - acc: 0.8109Epoch 00286: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 9.1694e-04 - acc: 0.8115 - val_loss: 8.6803e-04 - val_acc: 0.8273
Learning rate = 0.0018475000000000002
Epoch 288/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.1475e-04 - acc: 0.8124Epoch 00287: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 9.1462e-04 - acc: 0.8126 - val_loss: 9.4534e-04 - val_acc: 0.8273
Learning rate = 0.00184
Epoch 289/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.2565e-04 - acc: 0.8127Epoch 00288: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 9.2551e-04 - acc: 0.8129 - val_loss: 9.2424e-04 - val_acc: 0.8364
Learning rate = 0.0018325
Epoch 290/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.0310e-04 - acc: 0.8189Epoch 00289: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 9.0377e-04 - acc: 0.8191 - val_loss: 9.3489e-04 - val_acc: 0.8273
Learning rate = 0.001825
Epoch 291/400
3390/3400 [============================>.] - ETA: 0s - loss: 9.1454e-04 - acc: 0.8133Epoch 00290: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 9.1425e-04 - acc: 0.8138 - val_loss: 8.9098e-04 - val_acc: 0.8000
Learning rate = 0.0018175
Epoch 292/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.9045e-04 - acc: 0.8145Epoch 00291: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.9013e-04 - acc: 0.8150 - val_loss: 8.7005e-04 - val_acc: 0.8432
Learning rate = 0.00181
Epoch 293/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.8600e-04 - acc: 0.8136Epoch 00292: val_loss improved from 0.00087 to 0.00085, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 8.8619e-04 - acc: 0.8135 - val_loss: 8.5490e-04 - val_acc: 0.8091
Learning rate = 0.0018025
Epoch 294/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.7852e-04 - acc: 0.8239Epoch 00293: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 8.7871e-04 - acc: 0.8241 - val_loss: 8.6767e-04 - val_acc: 0.8409
Learning rate = 0.0017950000000000002
Epoch 295/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.8197e-04 - acc: 0.8139Epoch 00294: val_loss improved from 0.00085 to 0.00083, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 8.8218e-04 - acc: 0.8141 - val_loss: 8.3146e-04 - val_acc: 0.8227
Learning rate = 0.0017875
Epoch 296/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.8970e-04 - acc: 0.8115Epoch 00295: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 8.8953e-04 - acc: 0.8115 - val_loss: 8.9515e-04 - val_acc: 0.8273
Learning rate = 0.00178
Epoch 297/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.8917e-04 - acc: 0.8186Epoch 00296: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.8860e-04 - acc: 0.8185 - val_loss: 8.7212e-04 - val_acc: 0.8227
Learning rate = 0.0017725
Epoch 298/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.7762e-04 - acc: 0.8307Epoch 00297: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.7701e-04 - acc: 0.8312 - val_loss: 8.8181e-04 - val_acc: 0.8205
Learning rate = 0.001765
Epoch 299/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.7123e-04 - acc: 0.8124Epoch 00298: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.7136e-04 - acc: 0.8126 - val_loss: 9.3596e-04 - val_acc: 0.8091
Learning rate = 0.0017575
Epoch 300/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.7812e-04 - acc: 0.8153Epoch 00299: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.7763e-04 - acc: 0.8153 - val_loss: 8.5503e-04 - val_acc: 0.8068
Learning rate = 0.00175
Epoch 301/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.6525e-04 - acc: 0.8263Epoch 00300: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.6455e-04 - acc: 0.8265 - val_loss: 8.6911e-04 - val_acc: 0.8023
Learning rate = 0.0017425000000000001
Epoch 302/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.5071e-04 - acc: 0.8109Epoch 00301: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.5018e-04 - acc: 0.8112 - val_loss: 8.5312e-04 - val_acc: 0.8273
Learning rate = 0.001735
Epoch 303/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.5436e-04 - acc: 0.8209Epoch 00302: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.5382e-04 - acc: 0.8212 - val_loss: 8.6305e-04 - val_acc: 0.8205
Learning rate = 0.0017274999999999999
Epoch 304/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.6540e-04 - acc: 0.8189Epoch 00303: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.6546e-04 - acc: 0.8182 - val_loss: 8.4793e-04 - val_acc: 0.8250
Learning rate = 0.0017200000000000002
Epoch 305/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.4077e-04 - acc: 0.8277Epoch 00304: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.4067e-04 - acc: 0.8268 - val_loss: 8.5617e-04 - val_acc: 0.8364
Learning rate = 0.0017125
Epoch 306/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.5688e-04 - acc: 0.8189Epoch 00305: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.5624e-04 - acc: 0.8191 - val_loss: 8.3664e-04 - val_acc: 0.8523
Learning rate = 0.0017050000000000001
Epoch 307/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.4846e-04 - acc: 0.8301Epoch 00306: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.5184e-04 - acc: 0.8294 - val_loss: 8.3644e-04 - val_acc: 0.8295
Learning rate = 0.0016975000000000002
Epoch 308/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.4894e-04 - acc: 0.8277Epoch 00307: val_loss improved from 0.00083 to 0.00080, saving model to m34.h5
3400/3400 [==============================] - 178s - loss: 8.4870e-04 - acc: 0.8279 - val_loss: 8.0170e-04 - val_acc: 0.8227
Learning rate = 0.00169
Epoch 309/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.3327e-04 - acc: 0.8198Epoch 00308: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.3313e-04 - acc: 0.8197 - val_loss: 8.6151e-04 - val_acc: 0.8341
Learning rate = 0.0016825
Epoch 310/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.3197e-04 - acc: 0.8224Epoch 00309: val_loss improved from 0.00080 to 0.00079, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 8.3149e-04 - acc: 0.8218 - val_loss: 7.8801e-04 - val_acc: 0.8386
Learning rate = 0.001675
Epoch 311/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.2519e-04 - acc: 0.8206Epoch 00310: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 8.2525e-04 - acc: 0.8200 - val_loss: 8.6429e-04 - val_acc: 0.8182
Learning rate = 0.0016675000000000001
Epoch 312/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.1500e-04 - acc: 0.8324Epoch 00311: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.1455e-04 - acc: 0.8324 - val_loss: 8.2699e-04 - val_acc: 0.8386
Learning rate = 0.00166
Epoch 313/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.2835e-04 - acc: 0.8295Epoch 00312: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.2827e-04 - acc: 0.8288 - val_loss: 8.0582e-04 - val_acc: 0.8295
Learning rate = 0.0016524999999999999
Epoch 314/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.2230e-04 - acc: 0.8198Epoch 00313: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.2233e-04 - acc: 0.8194 - val_loss: 8.0672e-04 - val_acc: 0.8386
Learning rate = 0.0016450000000000002
Epoch 315/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.2282e-04 - acc: 0.8192Epoch 00314: val_loss did not improve
3400/3400 [==============================] - 172s - loss: 8.2275e-04 - acc: 0.8182 - val_loss: 8.7041e-04 - val_acc: 0.8477
Learning rate = 0.0016375
Epoch 316/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.1806e-04 - acc: 0.8280Epoch 00315: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 8.1812e-04 - acc: 0.8279 - val_loss: 8.2029e-04 - val_acc: 0.8318
Learning rate = 0.00163
Epoch 317/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.9849e-04 - acc: 0.8342Epoch 00316: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 7.9910e-04 - acc: 0.8344 - val_loss: 8.3237e-04 - val_acc: 0.8477
Learning rate = 0.0016225
Epoch 318/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.1847e-04 - acc: 0.8313Epoch 00317: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 8.2028e-04 - acc: 0.8303 - val_loss: 8.3427e-04 - val_acc: 0.8409
Learning rate = 0.0016150000000000001
Epoch 319/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.0381e-04 - acc: 0.8295Epoch 00318: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 8.0362e-04 - acc: 0.8294 - val_loss: 8.5113e-04 - val_acc: 0.8273
Learning rate = 0.0016075
Epoch 320/400
3390/3400 [============================>.] - ETA: 0s - loss: 8.0795e-04 - acc: 0.8289Epoch 00319: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 8.0761e-04 - acc: 0.8288 - val_loss: 8.1275e-04 - val_acc: 0.8318
Learning rate = 0.0015999999999999999
Epoch 321/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.9877e-04 - acc: 0.8313Epoch 00320: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.9907e-04 - acc: 0.8315 - val_loss: 8.1256e-04 - val_acc: 0.8273
Learning rate = 0.0015925000000000002
Epoch 322/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.9358e-04 - acc: 0.8322Epoch 00321: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 7.9311e-04 - acc: 0.8321 - val_loss: 8.2851e-04 - val_acc: 0.8273
Learning rate = 0.001585
Epoch 323/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.8999e-04 - acc: 0.8274Epoch 00322: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.9059e-04 - acc: 0.8271 - val_loss: 8.0488e-04 - val_acc: 0.8227
Learning rate = 0.0015775
Epoch 324/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.8862e-04 - acc: 0.8248Epoch 00323: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 7.8854e-04 - acc: 0.8250 - val_loss: 8.0399e-04 - val_acc: 0.8250
Learning rate = 0.00157
Epoch 325/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.9129e-04 - acc: 0.8307Epoch 00324: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.9369e-04 - acc: 0.8306 - val_loss: 8.0954e-04 - val_acc: 0.8318
Learning rate = 0.0015625
Epoch 326/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.8001e-04 - acc: 0.8336Epoch 00325: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 7.8009e-04 - acc: 0.8341 - val_loss: 8.2558e-04 - val_acc: 0.8227
Learning rate = 0.001555
Epoch 327/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.7455e-04 - acc: 0.8304Epoch 00326: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.7437e-04 - acc: 0.8306 - val_loss: 7.9544e-04 - val_acc: 0.8341
Learning rate = 0.0015475
Epoch 328/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6310e-04 - acc: 0.8375Epoch 00327: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 7.6246e-04 - acc: 0.8374 - val_loss: 8.3145e-04 - val_acc: 0.8136
Learning rate = 0.0015400000000000001
Epoch 329/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6448e-04 - acc: 0.8236Epoch 00328: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 7.6434e-04 - acc: 0.8226 - val_loss: 8.0212e-04 - val_acc: 0.8159
Learning rate = 0.0015325
Epoch 330/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6828e-04 - acc: 0.8322Epoch 00329: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.6872e-04 - acc: 0.8321 - val_loss: 7.9162e-04 - val_acc: 0.8318
Learning rate = 0.0015249999999999999
Epoch 331/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6257e-04 - acc: 0.8319Epoch 00330: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.6249e-04 - acc: 0.8318 - val_loss: 8.3605e-04 - val_acc: 0.8227
Learning rate = 0.0015175000000000002
Epoch 332/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.7175e-04 - acc: 0.8268Epoch 00331: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 7.7153e-04 - acc: 0.8265 - val_loss: 7.9747e-04 - val_acc: 0.8341
Learning rate = 0.00151
Epoch 333/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.4206e-04 - acc: 0.8440Epoch 00332: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.4226e-04 - acc: 0.8435 - val_loss: 7.9235e-04 - val_acc: 0.8364
Learning rate = 0.0015025
Epoch 334/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6686e-04 - acc: 0.8265Epoch 00333: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.6689e-04 - acc: 0.8271 - val_loss: 8.0373e-04 - val_acc: 0.8136
Learning rate = 0.001495
Epoch 335/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.6397e-04 - acc: 0.8339Epoch 00334: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 7.6387e-04 - acc: 0.8341 - val_loss: 8.2549e-04 - val_acc: 0.8182
Learning rate = 0.0014875
Epoch 336/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.4480e-04 - acc: 0.8333Epoch 00335: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.4449e-04 - acc: 0.8335 - val_loss: 8.2589e-04 - val_acc: 0.8318
Learning rate = 0.00148
Epoch 337/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.4645e-04 - acc: 0.8381Epoch 00336: val_loss improved from 0.00079 to 0.00079, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 7.4572e-04 - acc: 0.8382 - val_loss: 7.8610e-04 - val_acc: 0.8455
Learning rate = 0.0014725
Epoch 338/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.5544e-04 - acc: 0.8351Epoch 00337: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.5565e-04 - acc: 0.8344 - val_loss: 8.2501e-04 - val_acc: 0.8250
Learning rate = 0.001465
Epoch 339/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.4189e-04 - acc: 0.8383Epoch 00338: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 7.4232e-04 - acc: 0.8388 - val_loss: 8.1708e-04 - val_acc: 0.8341
Learning rate = 0.0014575
Epoch 340/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.5184e-04 - acc: 0.8319Epoch 00339: val_loss improved from 0.00079 to 0.00078, saving model to m34.h5
3400/3400 [==============================] - 176s - loss: 7.5114e-04 - acc: 0.8324 - val_loss: 7.8243e-04 - val_acc: 0.8341
Learning rate = 0.00145
Epoch 341/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.2984e-04 - acc: 0.8392Epoch 00340: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 7.2989e-04 - acc: 0.8394 - val_loss: 8.0080e-04 - val_acc: 0.8500
Learning rate = 0.0014425
Epoch 342/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.3511e-04 - acc: 0.8339Epoch 00341: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.3474e-04 - acc: 0.8341 - val_loss: 7.8543e-04 - val_acc: 0.8386
Learning rate = 0.001435
Epoch 343/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.3774e-04 - acc: 0.8440Epoch 00342: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.3892e-04 - acc: 0.8441 - val_loss: 7.8861e-04 - val_acc: 0.8386
Learning rate = 0.0014275
Epoch 344/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.2723e-04 - acc: 0.8245Epoch 00343: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 7.2739e-04 - acc: 0.8247 - val_loss: 8.1503e-04 - val_acc: 0.8364
Learning rate = 0.00142
Epoch 345/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.2981e-04 - acc: 0.8404Epoch 00344: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.2981e-04 - acc: 0.8403 - val_loss: 8.2264e-04 - val_acc: 0.8455
Learning rate = 0.0014125000000000001
Epoch 346/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.2639e-04 - acc: 0.8386Epoch 00345: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.2624e-04 - acc: 0.8391 - val_loss: 7.9965e-04 - val_acc: 0.8477
Learning rate = 0.001405
Epoch 347/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.2998e-04 - acc: 0.8357Epoch 00346: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.2946e-04 - acc: 0.8353 - val_loss: 7.8283e-04 - val_acc: 0.8341
Learning rate = 0.0013975
Epoch 348/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.1349e-04 - acc: 0.8369Epoch 00347: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 7.1387e-04 - acc: 0.8365 - val_loss: 7.9793e-04 - val_acc: 0.8318
Learning rate = 0.00139
Epoch 349/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.0799e-04 - acc: 0.8363Epoch 00348: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 7.0886e-04 - acc: 0.8362 - val_loss: 7.9561e-04 - val_acc: 0.8250
Learning rate = 0.0013825
Epoch 350/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.0865e-04 - acc: 0.8301Epoch 00349: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 7.0820e-04 - acc: 0.8300 - val_loss: 8.0407e-04 - val_acc: 0.8364
Learning rate = 0.001375
Epoch 351/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.1420e-04 - acc: 0.8342Epoch 00350: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 7.1508e-04 - acc: 0.8341 - val_loss: 7.8356e-04 - val_acc: 0.8455
Learning rate = 0.0013675
Epoch 352/400
3390/3400 [============================>.] - ETA: 0s - loss: 7.0951e-04 - acc: 0.8345Epoch 00351: val_loss improved from 0.00078 to 0.00078, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 7.0928e-04 - acc: 0.8335 - val_loss: 7.7616e-04 - val_acc: 0.8273
Learning rate = 0.00136
Epoch 353/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.9469e-04 - acc: 0.8392Epoch 00352: val_loss improved from 0.00078 to 0.00076, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 6.9514e-04 - acc: 0.8391 - val_loss: 7.6112e-04 - val_acc: 0.8432
Learning rate = 0.0013525
Epoch 354/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.9997e-04 - acc: 0.8375Epoch 00353: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 7.0032e-04 - acc: 0.8376 - val_loss: 7.9859e-04 - val_acc: 0.8386
Learning rate = 0.001345
Epoch 355/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8878e-04 - acc: 0.8431Epoch 00354: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.8895e-04 - acc: 0.8429 - val_loss: 8.1062e-04 - val_acc: 0.8386
Learning rate = 0.0013375000000000001
Epoch 356/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.9389e-04 - acc: 0.8280Epoch 00355: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.9359e-04 - acc: 0.8279 - val_loss: 7.9424e-04 - val_acc: 0.8182
Learning rate = 0.00133
Epoch 357/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8506e-04 - acc: 0.8366Epoch 00356: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.8533e-04 - acc: 0.8368 - val_loss: 7.8947e-04 - val_acc: 0.8182
Learning rate = 0.0013225
Epoch 358/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8569e-04 - acc: 0.8410Epoch 00357: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.8574e-04 - acc: 0.8406 - val_loss: 8.0369e-04 - val_acc: 0.8523
Learning rate = 0.001315
Epoch 359/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.9203e-04 - acc: 0.8454Epoch 00358: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.9143e-04 - acc: 0.8453 - val_loss: 7.8504e-04 - val_acc: 0.8341
Learning rate = 0.0013075
Epoch 360/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8525e-04 - acc: 0.8386Epoch 00359: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.8608e-04 - acc: 0.8385 - val_loss: 7.6413e-04 - val_acc: 0.8318
Learning rate = 0.0013
Epoch 361/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8464e-04 - acc: 0.8369Epoch 00360: val_loss improved from 0.00076 to 0.00075, saving model to m34.h5
3400/3400 [==============================] - 173s - loss: 6.8453e-04 - acc: 0.8374 - val_loss: 7.5462e-04 - val_acc: 0.8273
Learning rate = 0.0012925
Epoch 362/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8486e-04 - acc: 0.8472Epoch 00361: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.8515e-04 - acc: 0.8468 - val_loss: 7.6056e-04 - val_acc: 0.8318
Learning rate = 0.0012850000000000001
Epoch 363/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.7489e-04 - acc: 0.8410Epoch 00362: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.7521e-04 - acc: 0.8409 - val_loss: 7.6775e-04 - val_acc: 0.8250
Learning rate = 0.0012775
Epoch 364/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.8691e-04 - acc: 0.8434Epoch 00363: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.8656e-04 - acc: 0.8432 - val_loss: 7.8522e-04 - val_acc: 0.8273
Learning rate = 0.00127
Epoch 365/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.7279e-04 - acc: 0.8407Epoch 00364: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 6.7273e-04 - acc: 0.8412 - val_loss: 7.6566e-04 - val_acc: 0.8273
Learning rate = 0.0012625
Epoch 366/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.7264e-04 - acc: 0.8407Epoch 00365: val_loss improved from 0.00075 to 0.00074, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 6.7252e-04 - acc: 0.8409 - val_loss: 7.4230e-04 - val_acc: 0.8523
Learning rate = 0.001255
Epoch 367/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.5951e-04 - acc: 0.8407Epoch 00366: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 6.5941e-04 - acc: 0.8406 - val_loss: 7.6539e-04 - val_acc: 0.8250
Learning rate = 0.0012475
Epoch 368/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.6666e-04 - acc: 0.8401Epoch 00367: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 6.6653e-04 - acc: 0.8403 - val_loss: 7.6245e-04 - val_acc: 0.8273
Learning rate = 0.00124
Epoch 369/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.6088e-04 - acc: 0.8460Epoch 00368: val_loss did not improve
3400/3400 [==============================] - 180s - loss: 6.6121e-04 - acc: 0.8459 - val_loss: 7.9113e-04 - val_acc: 0.8364
Learning rate = 0.0012325
Epoch 370/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.6222e-04 - acc: 0.8434Epoch 00369: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.6210e-04 - acc: 0.8435 - val_loss: 7.9471e-04 - val_acc: 0.8341
Learning rate = 0.001225
Epoch 371/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.5964e-04 - acc: 0.8469Epoch 00370: val_loss did not improve
3400/3400 [==============================] - 182s - loss: 6.5920e-04 - acc: 0.8471 - val_loss: 7.7054e-04 - val_acc: 0.8341
Learning rate = 0.0012175
Epoch 372/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.5941e-04 - acc: 0.8386Epoch 00371: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.5964e-04 - acc: 0.8388 - val_loss: 7.9450e-04 - val_acc: 0.8205
Learning rate = 0.0012100000000000001
Epoch 373/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.5874e-04 - acc: 0.8445Epoch 00372: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.5838e-04 - acc: 0.8444 - val_loss: 7.7178e-04 - val_acc: 0.8318
Learning rate = 0.0012025
Epoch 374/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.4290e-04 - acc: 0.8445Epoch 00373: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.4304e-04 - acc: 0.8441 - val_loss: 8.0451e-04 - val_acc: 0.8295
Learning rate = 0.001195
Epoch 375/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.4650e-04 - acc: 0.8466Epoch 00374: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 6.4642e-04 - acc: 0.8462 - val_loss: 7.7208e-04 - val_acc: 0.8250
Learning rate = 0.0011875
Epoch 376/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.3791e-04 - acc: 0.8466Epoch 00375: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.3792e-04 - acc: 0.8471 - val_loss: 7.6985e-04 - val_acc: 0.8432
Learning rate = 0.00118
Epoch 377/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.4407e-04 - acc: 0.8454Epoch 00376: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.4427e-04 - acc: 0.8456 - val_loss: 7.7459e-04 - val_acc: 0.8295
Learning rate = 0.0011725
Epoch 378/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.4191e-04 - acc: 0.8463Epoch 00377: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.4167e-04 - acc: 0.8462 - val_loss: 7.6193e-04 - val_acc: 0.8500
Learning rate = 0.001165
Epoch 379/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.2583e-04 - acc: 0.8490Epoch 00378: val_loss did not improve
3400/3400 [==============================] - 173s - loss: 6.2597e-04 - acc: 0.8488 - val_loss: 7.5854e-04 - val_acc: 0.8364
Learning rate = 0.0011575000000000001
Epoch 380/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.3824e-04 - acc: 0.8496Epoch 00379: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.3797e-04 - acc: 0.8500 - val_loss: 7.6339e-04 - val_acc: 0.8386
Learning rate = 0.00115
Epoch 381/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.3131e-04 - acc: 0.8484Epoch 00380: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 6.3211e-04 - acc: 0.8485 - val_loss: 7.5678e-04 - val_acc: 0.8455
Learning rate = 0.0011425
Epoch 382/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.3024e-04 - acc: 0.8425Epoch 00381: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 6.3014e-04 - acc: 0.8426 - val_loss: 7.9443e-04 - val_acc: 0.8523
Learning rate = 0.001135
Epoch 383/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.2800e-04 - acc: 0.8490Epoch 00382: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.2824e-04 - acc: 0.8491 - val_loss: 7.8054e-04 - val_acc: 0.8545
Learning rate = 0.0011275
Epoch 384/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1326e-04 - acc: 0.8472Epoch 00383: val_loss improved from 0.00074 to 0.00074, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 6.1310e-04 - acc: 0.8474 - val_loss: 7.4025e-04 - val_acc: 0.8455
Learning rate = 0.0011200000000000001
Epoch 385/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1856e-04 - acc: 0.8513Epoch 00384: val_loss improved from 0.00074 to 0.00074, saving model to m34.h5
3400/3400 [==============================] - 175s - loss: 6.1863e-04 - acc: 0.8515 - val_loss: 7.4007e-04 - val_acc: 0.8500
Learning rate = 0.0011125
Epoch 386/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1581e-04 - acc: 0.8484Epoch 00385: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.1601e-04 - acc: 0.8488 - val_loss: 7.4024e-04 - val_acc: 0.8341
Learning rate = 0.001105
Epoch 387/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1999e-04 - acc: 0.8519Epoch 00386: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.1981e-04 - acc: 0.8521 - val_loss: 7.5283e-04 - val_acc: 0.8432
Learning rate = 0.0010975
Epoch 388/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.9477e-04 - acc: 0.8516Epoch 00387: val_loss improved from 0.00074 to 0.00073, saving model to m34.h5
3400/3400 [==============================] - 174s - loss: 5.9526e-04 - acc: 0.8518 - val_loss: 7.3224e-04 - val_acc: 0.8432
Learning rate = 0.00109
Epoch 389/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1014e-04 - acc: 0.8454Epoch 00388: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.0991e-04 - acc: 0.8459 - val_loss: 7.4032e-04 - val_acc: 0.8523
Learning rate = 0.0010825000000000001
Epoch 390/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.0799e-04 - acc: 0.8496Epoch 00389: val_loss did not improve
3400/3400 [==============================] - 178s - loss: 6.0742e-04 - acc: 0.8500 - val_loss: 7.5592e-04 - val_acc: 0.8250
Learning rate = 0.001075
Epoch 391/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.0572e-04 - acc: 0.8481Epoch 00390: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.0591e-04 - acc: 0.8482 - val_loss: 7.4006e-04 - val_acc: 0.8409
Learning rate = 0.0010675
Epoch 392/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.1340e-04 - acc: 0.8445Epoch 00391: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 6.1361e-04 - acc: 0.8441 - val_loss: 7.4520e-04 - val_acc: 0.8364
Learning rate = 0.00106
Epoch 393/400
3390/3400 [============================>.] - ETA: 0s - loss: 6.0154e-04 - acc: 0.8525Epoch 00392: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 6.0148e-04 - acc: 0.8526 - val_loss: 7.4723e-04 - val_acc: 0.8409
Learning rate = 0.0010525
Epoch 394/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.9816e-04 - acc: 0.8522Epoch 00393: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 5.9901e-04 - acc: 0.8521 - val_loss: 7.4337e-04 - val_acc: 0.8341
Learning rate = 0.001045
Epoch 395/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.9567e-04 - acc: 0.8499Epoch 00394: val_loss did not improve
3400/3400 [==============================] - 174s - loss: 5.9537e-04 - acc: 0.8503 - val_loss: 7.5518e-04 - val_acc: 0.8364
Learning rate = 0.0010375
Epoch 396/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.8423e-04 - acc: 0.8537Epoch 00395: val_loss did not improve
3400/3400 [==============================] - 176s - loss: 5.8390e-04 - acc: 0.8538 - val_loss: 7.5115e-04 - val_acc: 0.8386
Learning rate = 0.00103
Epoch 397/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.9053e-04 - acc: 0.8578Epoch 00396: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 5.9028e-04 - acc: 0.8579 - val_loss: 7.5047e-04 - val_acc: 0.8250
Learning rate = 0.0010225
Epoch 398/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.8195e-04 - acc: 0.8475Epoch 00397: val_loss did not improve
3400/3400 [==============================] - 177s - loss: 5.8202e-04 - acc: 0.8476 - val_loss: 7.5296e-04 - val_acc: 0.8364
Learning rate = 0.001015
Epoch 399/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.7874e-04 - acc: 0.8552Epoch 00398: val_loss did not improve
3400/3400 [==============================] - 175s - loss: 5.7895e-04 - acc: 0.8553 - val_loss: 7.4657e-04 - val_acc: 0.8341
Learning rate = 0.0010075
Epoch 400/400
3390/3400 [============================>.] - ETA: 0s - loss: 5.8814e-04 - acc: 0.8481Epoch 00399: val_loss did not improve
3400/3400 [==============================] - 192s - loss: 5.8817e-04 - acc: 0.8482 - val_loss: 7.5117e-04 - val_acc: 0.8409

Step 7: Visualize the Loss and Test Predictions

(IMPLEMENTATION) Answer a few questions and visualize the loss

Question 1: Outline the steps you took to get to your final neural network architecture and your reasoning at each step.

Answer: I've tried a lot of different architectures (34 to be precise). The results are shown in the table (see below, or "results_summary.xlsx"). Model 1-20 I was playing with a structure of a model. Models 20-26: choosing optimizer. Models 27-34 fine-tuning and additional features

table1 table2

Question 2: Defend your choice of optimizer. Which optimizers did you test, and how did you determine which worked best?

Answer: Models 20-26 shows the difference betwwen optimizers. Adam, rmsprop and adamax were pretty good, but I prefer adamax. See validation and training loss for different models below.

rmsprop: rmsprop

SGD: sgd

adagrad: adagrad

adadelta: adadelta

adam: adam

adamax: adamax

nadam: nadam

Use the code cell below to plot the training and validation loss of your neural network. You may find this resource useful.

In [53]:
plt.plot(h34.history['loss'])
plt.plot(h34.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

plt.plot(h34.history['acc'])
plt.plot(h34.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

Question 3: Do you notice any evidence of overfitting or underfitting in the above plot? If so, what steps have you taken to improve your model? Note that slight overfitting or underfitting will not hurt your chances of a successful submission, as long as you have attempted some solutions towards improving your model (such as regularization, dropout, increased/decreased number of layers, etc).

Answer: Plot shows model "m34"(equal to my final model "m34" but with the reduced batch size, purely I've lost a plot for m33). I overfits a bit at the end, but overall it shows rather good result. My previous models overfits more, but using a Dropout layers + augmentation of images helps.

My final model has val_loss = 0.00069 (6.9e-04). I see two ways to improve it:

  • add more augmented images (rotating for 5-10 degrees for example)
  • use full training set (7000 images instead of 2000) After it, it will be possible to do a bit more aggressive dropout and increase number of epochs

Visualize a Subset of the Test Predictions

Execute the code cell below to visualize your model's predicted keypoints on a subset of the testing images.

In [58]:
m33 = load_model('m33.h5')

import matplotlib.pyplot as plt
%matplotlib inline

y_test = m33.predict(X_test)
fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_test[i], y_test[i], ax)

Step 8: Complete the pipeline

With the work you did in Sections 1 and 2 of this notebook, along with your freshly trained facial keypoint detector, you can now complete the full pipeline. That is given a color image containing a person or persons you can now

  • Detect the faces in this image automatically using OpenCV
  • Predict the facial keypoints in each face detected in the image
  • Paint predicted keypoints on each face detected

In this Subsection you will do just this!

(IMPLEMENTATION) Facial Keypoints Detector

Use the OpenCV face detection functionality you built in previous Sections to expand the functionality of your keypoints detector to color images with arbitrary size. Your function should perform the following steps

  1. Accept a color image.
  2. Convert the image to grayscale.
  3. Detect and crop the face contained in the image.
  4. Locate the facial keypoints in the cropped image.
  5. Overlay the facial keypoints in the original (color, uncropped) image.

Note: step 4 can be the trickiest because remember your convolutional network is only trained to detect facial keypoints in $96 \times 96$ grayscale images where each pixel was normalized to lie in the interval $[0,1]$, and remember that each facial keypoint was normalized during training to the interval $[-1,1]$. This means - practically speaking - to paint detected keypoints onto a test face you need to perform this same pre-processing to your candidate face - that is after detecting it you should resize it to $96 \times 96$ and normalize its values before feeding it into your facial keypoint detector. To be shown correctly on the original image the output keypoints from your detector then need to be shifted and re-normalized from the interval $[-1,1]$ to the width and height of your detected face.

When complete you should be able to produce example images like the one below

In [55]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')


# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# plot our image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('image')
ax1.imshow(image)
Out[55]:
<matplotlib.image.AxesImage at 0x14d26978>
In [79]:
def detect_points(img, scaleFactor=1.5, minNeighbors = 5):
    
    image_with_detections, faces = detect_faces(img, scaleFactor, minNeighbors)
    
    if len(faces) > 0 :
        point_coordinates = []

        for (x,y,w,h) in faces:

            face = image_gray[y:y+h,x:x+w]

            face = np.vstack(face) / 255
            face = face.astype(np.float32)
            orig_x, orig_y = face.shape
            face = cv2.resize(face, (96, 96), interpolation = cv2.INTER_CUBIC)
            prediction = m33.predict(np.expand_dims(np.expand_dims(face, axis=-1), axis=0))

            prediction = prediction * 48 + 48
            pred_x = x + prediction[0, 0::2] * orig_x / 96
            pred_y = y + prediction[0, 1::2] * orig_y / 96

            for (xp, yp) in zip(pred_x, pred_y):
                cv2.circle(image_with_detections,(xp,yp),3,(0,255,0), -1)
            
            point_coordinates.append((pred_x, pred_y))

        return image_with_detections, faces, point_coordinates
    else:
        return img, (None, None)

result_image, _, _ = detect_points(image)
        
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('image')
ax1.imshow(result_image)
Out[79]:
<matplotlib.image.AxesImage at 0x549d0898>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add facial keypoint detection to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for keypoint detection and marking in the previous exercise and you should be good to go!

In [80]:
import cv2
import time 
from keras.models import load_model
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # keep video stream open
    while rval:
        image_with_points, _, _ = detect_points(frame, 1.1, 3)
        
        cv2.imshow("face detection activated", image_with_points)
        
        # exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key < 255: # exit by pressing any key
            # destroy windows
            cv2.destroyAllWindows()
            
            # hack from stack overflow for making sure window closes on osx --> https://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()  
In [78]:
# Run your keypoint face painter
laptop_camera_go()
In [ ]:
 

(Optional) Further Directions - add a filter using facial keypoints

Using your freshly minted facial keypoint detector pipeline you can now do things like add fun filters to a person's face automatically. In this optional exercise you can play around with adding sunglasses automatically to each individual's face in an image as shown in a demonstration image below.

To produce this effect an image of a pair of sunglasses shown in the Python cell below.

In [62]:
# Load in sunglasses image - note the usage of the special option
# cv2.IMREAD_UNCHANGED, this option is used because the sunglasses 
# image has a 4th channel that allows us to control how transparent each pixel in the image is
sunglasses = cv2.imread("images/sunglasses_4.png", cv2.IMREAD_UNCHANGED)

# Plot the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.imshow(sunglasses)
ax1.axis('off');

This image is placed over each individual's face using the detected eye points to determine the location of the sunglasses, and eyebrow points to determine the size that the sunglasses should be for each person (one could also use the nose point to determine this).

Notice that this image actually has 4 channels, not just 3.

In [63]:
# Print out the shape of the sunglasses image
print ('The sunglasses image has shape: ' + str(np.shape(sunglasses)))
The sunglasses image has shape: (1123, 3064, 4)

It has the usual red, blue, and green channels any color image has, with the 4th channel representing the transparency level of each pixel in the image. Here's how the transparency channel works: the lower the value, the more transparent the pixel will become. The lower bound (completely transparent) is zero here, so any pixels set to 0 will not be seen.

This is how we can place this image of sunglasses on someone's face and still see the area around of their face where the sunglasses lie - because these pixels in the sunglasses image have been made completely transparent.

Lets check out the alpha channel of our sunglasses image in the next Python cell. Note because many of the pixels near the boundary are transparent we'll need to explicitly print out non-zero values if we want to see them.

In [64]:
# Print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('the alpha channel here looks like')
print (alpha_channel)

# Just to double check that there are indeed non-zero values
# Let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('\n the non-zero values of the alpha channel look like')
print (values)
the alpha channel here looks like
[[0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 ..., 
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]]

 the non-zero values of the alpha channel look like
(array([  17,   17,   17, ..., 1109, 1109, 1109], dtype=int64), array([ 687,  688,  689, ..., 2376, 2377, 2378], dtype=int64))

This means that when we place this sunglasses image on top of another image, we can use the transparency channel as a filter to tell us which pixels to overlay on a new image (only the non-transparent ones with values greater than zero).

One last thing: it's helpful to understand which keypoint belongs to the eyes, mouth, etc. So, in the image below, we also display the index of each facial keypoint directly on the image so that you can tell which keypoints are for the eyes, eyebrows, etc.

With this information, you're well on your way to completing this filtering task! See if you can place the sunglasses automatically on the individuals in the image loaded in / shown in the next Python cell.

In [65]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)


# Plot the image
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[65]:
<matplotlib.image.AxesImage at 0x54bb7198>
In [81]:
def put_sunglasses(img):
    image_copy = np.copy(img)

    _, faces, points = detect_points(image_copy, 1.1, 3)
    
    if len(faces) < 1:
        return image_copy
    
    b_channel, g_channel, r_channel = cv2.split(image_copy)
    alpha_channel = np.ones(b_channel.shape, dtype=b_channel.dtype) * 255
    image_copy = cv2.merge((b_channel, g_channel, r_channel, alpha_channel))
    
    ratio = np.shape(sunglasses)[1]*1.0/np.shape(sunglasses)[0] # width/height

    X_SCALE = 1.1  # sunglasses should be a bit wider, than eyes
    Y_SCALE = 0.15 

    face_count = 0
    for (x,y,w,h) in faces:
        width = points[face_count][0][7] - points[face_count][0][9]
        width = int(width*X_SCALE)  
        height = int(width/ratio)
        eye_x = int(points[face_count][0][9] - width * ((X_SCALE-1) / 2) ) 
        eye_y = int(points[face_count][1][9] - height * Y_SCALE) 


        sg_copy = np.copy(sunglasses)
        sg_copy = cv2.resize(sg_copy, (width, height))

        for xi in range(eye_x, eye_x+width):
            for yi in range(eye_y, eye_y+height):
                if sg_copy[yi-eye_y ,xi-eye_x, 3] != 0:
                    image_copy[yi, xi] = sg_copy[yi-eye_y, xi-eye_x]

        face_count+=1
    
    return image_copy

image_copy = put_sunglasses(image)
    
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with sunglasses')
ax1.imshow(image_copy)    
Out[81]:
<matplotlib.image.AxesImage at 0x5494c358>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add the sunglasses filter to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for adding sunglasses to someone's face in the previous optional exercise and you should be good to go!

In [82]:
import cv2
import time 
from keras.models import load_model
import numpy as np

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        # Plot image from camera with detections marked
        image_with_sunglasses = put_sunglasses(frame)
        
        cv2.imshow("face detection activated", image_with_sunglasses)
        
        # Exit functionality - press any key to exit laptop video
        key = cv2.waitKey(20)
        if key < 255: # exit by pressing any key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [83]:
# Load facial landmark detector model
# model = load_model('my_model.h5')

# Run sunglasses painter
laptop_camera_go()
In [ ]: